I0519 21:12:19.858792 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0519 21:12:19.859136 6 e2e.go:109] Starting e2e run "35f88040-4317-4d4c-8836-ccfcd5f0b04b" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589922738 - Will randomize all specs
Will run 278 of 4842 specs

May 19 21:12:19.918: INFO: >>> kubeConfig: /root/.kube/config
May 19 21:12:19.920: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 19 21:12:19.945: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 19 21:12:19.974: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 19 21:12:19.974: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 19 21:12:19.974: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 19 21:12:19.987: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 19 21:12:19.987: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 19 21:12:19.987: INFO: e2e test version: v1.17.4
May 19 21:12:19.988: INFO: kube-apiserver version: v1.17.2
May 19 21:12:19.988: INFO: >>> kubeConfig: /root/.kube/config
May 19 21:12:19.994: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:12:19.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
May 19 21:12:20.064: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
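The check this spec performs can be reproduced by hand against any cluster; a minimal sketch in shell, assuming kubectl is on PATH and the kubeconfig is whatever your environment uses rather than the harness's /root/.kube/config:

# List every group/version the apiserver advertises and assert that
# the core "v1" version is among them, which is all this spec does.
kubectl api-versions | grep -qx 'v1' && echo 'v1 is available'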
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
May 19 21:12:20.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
May 19 21:12:20.288: INFO: stderr: ""
May 19 21:12:20.288: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:12:20.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6133" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":1,"skipped":23,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:12:20.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0519 21:13:00.741088 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
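The orphan behaviour exercised in the steps above is driven through the API's DeleteOptions (PropagationPolicy: Orphan). A rough kubectl equivalent, sketched with the RC name visible in this run's pod list (--cascade=false is the kubectl 1.17-era spelling; newer releases use --cascade=orphan):

# Delete only the controller; the garbage collector must leave its
# pods running, now without an owner reference back to the RC.
kubectl -n gc-6128 delete rc simpletest.rc --cascade=false
kubectl -n gc-6128 get pods   # the simpletest.rc-* pods stay Running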
May 19 21:13:00.741: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:13:00.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6128" for this suite.
• [SLOW TEST:40.450 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":2,"skipped":28,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:13:00.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-6ecf741e-61f7-4581-bbd3-d6c785dfc045
STEP: Creating a pod to test consume configMaps
May 19 21:13:00.868: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bfb2ba7f-82d1-4f0e-a55f-bf514f1932ea" in namespace "projected-4209" to be "success or failure"
May 19 21:13:00.874: INFO: Pod "pod-projected-configmaps-bfb2ba7f-82d1-4f0e-a55f-bf514f1932ea": Phase="Pending", Reason="", readiness=false. Elapsed: 5.862485ms
May 19 21:13:02.878: INFO: Pod "pod-projected-configmaps-bfb2ba7f-82d1-4f0e-a55f-bf514f1932ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00977976s
May 19 21:13:04.881: INFO: Pod "pod-projected-configmaps-bfb2ba7f-82d1-4f0e-a55f-bf514f1932ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013188013s
STEP: Saw pod success
May 19 21:13:04.881: INFO: Pod "pod-projected-configmaps-bfb2ba7f-82d1-4f0e-a55f-bf514f1932ea" satisfied condition "success or failure"
May 19 21:13:04.884: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-bfb2ba7f-82d1-4f0e-a55f-bf514f1932ea container projected-configmap-volume-test:
STEP: delete the pod
May 19 21:13:04.958: INFO: Waiting for pod pod-projected-configmaps-bfb2ba7f-82d1-4f0e-a55f-bf514f1932ea to disappear
May 19 21:13:04.966: INFO: Pod pod-projected-configmaps-bfb2ba7f-82d1-4f0e-a55f-bf514f1932ea no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:13:04.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4209" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":36,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:13:04.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
May 19 21:13:05.150: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 19 21:13:05.173: INFO: Waiting for terminating namespaces to be deleted...
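Stepping back to the projected-ConfigMap spec that passed just above: it boils down to a ConfigMap projected into a volume plus a short-lived pod that cats the projected file. A minimal sketch with illustrative names (demo-cm, demo-projected-cm; the container name matches the log, everything else is a stand-in):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-cm
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
# The framework then polls the pod until Phase=Succeeded, which is
# the "success or failure" wait logged above.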
May 19 21:13:05.195: INFO: Logging pods the kubelet thinks are on node jerma-worker before test
May 19 21:13:05.200: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.200: INFO: Container kindnet-cni ready: true, restart count 0
May 19 21:13:05.200: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.200: INFO: Container kube-proxy ready: true, restart count 0
May 19 21:13:05.200: INFO: simpletest.rc-7gjvv from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.200: INFO: Container nginx ready: true, restart count 0
May 19 21:13:05.200: INFO: simpletest.rc-wknhs from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.200: INFO: Container nginx ready: true, restart count 0
May 19 21:13:05.200: INFO: simpletest.rc-8cwqh from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.200: INFO: Container nginx ready: true, restart count 0
May 19 21:13:05.200: INFO: simpletest.rc-qb4dq from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.200: INFO: Container nginx ready: true, restart count 0
May 19 21:13:05.200: INFO: simpletest.rc-495tm from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.200: INFO: Container nginx ready: true, restart count 0
May 19 21:13:05.200: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test
May 19 21:13:05.223: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.224: INFO: Container kube-proxy ready: true, restart count 0
May 19 21:13:05.224: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.224: INFO: Container kube-hunter ready: false, restart count 0
May 19 21:13:05.224: INFO: simpletest.rc-5ncqs from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.224: INFO: Container nginx ready: true, restart count 0
May 19 21:13:05.224: INFO: simpletest.rc-mkwz6 from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.224: INFO: Container nginx ready: true, restart count 0
May 19 21:13:05.224: INFO: simpletest.rc-rvtp7 from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.224: INFO: Container nginx ready: true, restart count 0
May 19 21:13:05.224: INFO: simpletest.rc-mbk4k from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.224: INFO: Container nginx ready: true, restart count 0
May 19 21:13:05.224: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.224: INFO: Container kindnet-cni ready: true, restart count 0
May 19 21:13:05.224: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.224: INFO: Container kube-bench ready: false, restart count 0
May 19 21:13:05.224: INFO: simpletest.rc-62sln from gc-6128 started at 2020-05-19 21:12:20 +0000 UTC (1 container statuses recorded)
May 19 21:13:05.224: INFO: Container nginx ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16108a5972a06e0a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16108a597372b405], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:13:06.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9018" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":4,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:13:06.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-7270
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7270 to expose endpoints map[]
May 19 21:13:06.447: INFO: successfully validated that service multi-endpoint-test in namespace services-7270 exposes endpoints map[] (16.13632ms elapsed)
STEP: Creating pod pod1 in namespace services-7270
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7270 to expose endpoints map[pod1:[100]]
May 19 21:13:10.766: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.307640881s elapsed, will retry)
May 19 21:13:12.018: INFO: successfully validated that service multi-endpoint-test in namespace services-7270 exposes endpoints map[pod1:[100]] (5.559223386s elapsed)
STEP: Creating pod pod2 in namespace services-7270
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7270 to expose endpoints map[pod1:[100] pod2:[101]]
May 19 21:13:15.235: INFO: successfully validated that service multi-endpoint-test in namespace services-7270 exposes endpoints map[pod1:[100] pod2:[101]] (3.206674715s elapsed)
STEP: Deleting pod pod1 in namespace services-7270
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7270 to expose endpoints map[pod2:[101]]
May 19 21:13:16.283: INFO: successfully validated that service multi-endpoint-test in namespace services-7270 exposes endpoints map[pod2:[101]] (1.043718771s elapsed)
STEP: Deleting pod pod2 in namespace services-7270
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7270 to expose endpoints map[]
May 19 21:13:17.313: INFO: successfully validated that service multi-endpoint-test in namespace services-7270 exposes endpoints map[] (1.025563778s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:13:17.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7270" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:11.128 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":5,"skipped":86,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:13:17.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
May 19 21:13:17.508: INFO: Waiting up to 5m0s for pod "pod-5d92f7a0-e393-4dc9-8fcb-a38b3e0059f1" in namespace "emptydir-5893" to be "success or failure"
May 19 21:13:17.515: INFO: Pod "pod-5d92f7a0-e393-4dc9-8fcb-a38b3e0059f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.848869ms
May 19 21:13:19.520: INFO: Pod "pod-5d92f7a0-e393-4dc9-8fcb-a38b3e0059f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011717992s
May 19 21:13:21.524: INFO: Pod "pod-5d92f7a0-e393-4dc9-8fcb-a38b3e0059f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016005653s
May 19 21:13:23.528: INFO: Pod "pod-5d92f7a0-e393-4dc9-8fcb-a38b3e0059f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01984269s
STEP: Saw pod success
May 19 21:13:23.528: INFO: Pod "pod-5d92f7a0-e393-4dc9-8fcb-a38b3e0059f1" satisfied condition "success or failure"
May 19 21:13:23.531: INFO: Trying to get logs from node jerma-worker2 pod pod-5d92f7a0-e393-4dc9-8fcb-a38b3e0059f1 container test-container:
STEP: delete the pod
May 19 21:13:23.578: INFO: Waiting for pod pod-5d92f7a0-e393-4dc9-8fcb-a38b3e0059f1 to disappear
May 19 21:13:23.593: INFO: Pod pod-5d92f7a0-e393-4dc9-8fcb-a38b3e0059f1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:13:23.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5893" for this suite.
• [SLOW TEST:6.205 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:13:23.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 19 21:13:23.701: INFO: Pod name pod-release: Found 0 pods out of 1
May 19 21:13:28.705: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:13:28.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-264" for this suite.
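What "released" means in the spec above: a ReplicationController only owns pods whose labels match its selector, so relabelling a pod orphans it and the controller immediately creates a replacement. A sketch of the same scenario by hand, using the pod-release name from the log (the label key and nginx image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
# Change the matched label; the RC releases the pod and starts another.
POD=$(kubectl get pods -l name=pod-release -o name | head -n1)
kubectl label --overwrite "$POD" name=released
kubectl get pods --show-labels   # old pod kept (relabelled), new pod created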
• [SLOW TEST:5.172 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":7,"skipped":185,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:13:28.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 19 21:13:28.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5650'
May 19 21:13:32.739: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 19 21:13:32.739: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
May 19 21:13:32.772: INFO: scanned /root for discovery docs:
May 19 21:13:32.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5650'
May 19 21:13:49.669: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 19 21:13:49.669: INFO: stdout: "Created e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335\nScaling up e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
May 19 21:13:49.669: INFO: stdout: "Created e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335\nScaling up e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
May 19 21:13:49.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5650'
May 19 21:13:49.765: INFO: stderr: ""
May 19 21:13:49.765: INFO: stdout: "e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335-lnwr7 "
May 19 21:13:49.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335-lnwr7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5650'
May 19 21:13:49.864: INFO: stderr: ""
May 19 21:13:49.864: INFO: stdout: "true"
May 19 21:13:49.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335-lnwr7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5650'
May 19 21:13:49.961: INFO: stderr: ""
May 19 21:13:49.961: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
May 19 21:13:49.961: INFO: e2e-test-httpd-rc-d9c6d62f85ce132718aa50e837b64335-lnwr7 is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591
May 19 21:13:49.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5650'
May 19 21:13:50.068: INFO: stderr: ""
May 19 21:13:50.068: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:13:50.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5650" for this suite.
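For the record, the two deprecated commands this spec drives (both deprecation warnings are visible in the stderr captures above) can be replayed outside the harness; only the --kubeconfig and --namespace flags from the log are dropped here:

# Create an RC (not a Deployment) with the legacy run/v1 generator,
# then roll it to the same image, which forces the copy-and-rename
# dance logged above; finally clean up as [AfterEach] does.
kubectl run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1
kubectl rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent
kubectl delete rc e2e-test-httpd-rc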
• [SLOW TEST:21.295 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":8,"skipped":194,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:13:50.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 19 21:13:50.168: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-6f8301a4-af68-4a2c-9206-7f08961053af" in namespace "security-context-test-4692" to be "success or failure"
May 19 21:13:50.179: INFO: Pod "busybox-privileged-false-6f8301a4-af68-4a2c-9206-7f08961053af": Phase="Pending", Reason="", readiness=false. Elapsed: 10.984407ms
May 19 21:13:52.183: INFO: Pod "busybox-privileged-false-6f8301a4-af68-4a2c-9206-7f08961053af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015187817s
May 19 21:13:54.187: INFO: Pod "busybox-privileged-false-6f8301a4-af68-4a2c-9206-7f08961053af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019429549s
May 19 21:13:54.187: INFO: Pod "busybox-privileged-false-6f8301a4-af68-4a2c-9206-7f08961053af" satisfied condition "success or failure"
May 19 21:13:54.193: INFO: Got logs for pod "busybox-privileged-false-6f8301a4-af68-4a2c-9206-7f08961053af": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:13:54.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4692" for this suite.
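The "ip: RTNETLINK answers: Operation not permitted" log line above is the whole assertion: with privileged: false the container lacks the capabilities needed to modify host network state. A minimal sketch of an equivalent pod (the exact ip arguments the upstream test passes are an assumption; any state-changing ip invocation trips the same error):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false
spec:
  restartPolicy: Never
  containers:
  - name: busybox-privileged-false
    image: busybox
    # "|| true" keeps the exit code 0 so the pod still reaches
    # Succeeded, matching the "success or failure" wait above.
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false
EOF
kubectl logs busybox-privileged-false   # expect: ip: RTNETLINK answers: Operation not permitted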
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:13:54.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0519 21:14:06.958992 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 21:14:06.959: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:14:06.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9851" for this suite. 
• [SLOW TEST:12.764 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":10,"skipped":244,"failed":0}
SSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:14:06.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 19 21:14:19.865: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:19.865: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:19.900014 6 log.go:172] (0xc002971c30) (0xc001bf8c80) Create stream
I0519 21:14:19.900047 6 log.go:172] (0xc002971c30) (0xc001bf8c80) Stream added, broadcasting: 1
I0519 21:14:19.902098 6 log.go:172] (0xc002971c30) Reply frame received for 1
I0519 21:14:19.902129 6 log.go:172] (0xc002971c30) (0xc0023b4460) Create stream
I0519 21:14:19.902139 6 log.go:172] (0xc002971c30) (0xc0023b4460) Stream added, broadcasting: 3
I0519 21:14:19.902916 6 log.go:172] (0xc002971c30) Reply frame received for 3
I0519 21:14:19.902971 6 log.go:172] (0xc002971c30) (0xc0023b45a0) Create stream
I0519 21:14:19.902985 6 log.go:172] (0xc002971c30) (0xc0023b45a0) Stream added, broadcasting: 5
I0519 21:14:19.903811 6 log.go:172] (0xc002971c30) Reply frame received for 5
I0519 21:14:19.976802 6 log.go:172] (0xc002971c30) Data frame received for 3
I0519 21:14:19.976841 6 log.go:172] (0xc0023b4460) (3) Data frame handling
I0519 21:14:19.976870 6 log.go:172] (0xc0023b4460) (3) Data frame sent
I0519 21:14:19.987572 6 log.go:172] (0xc002971c30) Data frame received for 3
I0519 21:14:19.987597 6 log.go:172] (0xc0023b4460) (3) Data frame handling
I0519 21:14:19.987628 6 log.go:172] (0xc002971c30) Data frame received for 5
I0519 21:14:19.987639 6 log.go:172] (0xc0023b45a0) (5) Data frame handling
I0519 21:14:19.989683 6 log.go:172] (0xc002971c30) Data frame received for 1
I0519 21:14:19.989714 6 log.go:172] (0xc001bf8c80) (1) Data frame handling
I0519 21:14:19.989735 6 log.go:172] (0xc001bf8c80) (1) Data frame sent
I0519 21:14:19.989760 6 log.go:172] (0xc002971c30) (0xc001bf8c80) Stream removed, broadcasting: 1
I0519 21:14:19.989790 6 log.go:172] (0xc002971c30) Go away received
I0519 21:14:19.990087 6 log.go:172] (0xc002971c30) (0xc001bf8c80) Stream removed, broadcasting: 1
I0519 21:14:19.990099 6 log.go:172] (0xc002971c30) (0xc0023b4460) Stream removed, broadcasting: 3
I0519 21:14:19.990105 6 log.go:172] (0xc002971c30) (0xc0023b45a0) Stream removed, broadcasting: 5
May 19 21:14:19.990: INFO: Exec stderr: ""
May 19 21:14:19.990: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:19.990: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:20.022300 6 log.go:172] (0xc0025bdef0) (0xc001bf8fa0) Create stream
I0519 21:14:20.022332 6 log.go:172] (0xc0025bdef0) (0xc001bf8fa0) Stream added, broadcasting: 1
I0519 21:14:20.024784 6 log.go:172] (0xc0025bdef0) Reply frame received for 1
I0519 21:14:20.024817 6 log.go:172] (0xc0025bdef0) (0xc0023b4640) Create stream
I0519 21:14:20.024829 6 log.go:172] (0xc0025bdef0) (0xc0023b4640) Stream added, broadcasting: 3
I0519 21:14:20.025940 6 log.go:172] (0xc0025bdef0) Reply frame received for 3
I0519 21:14:20.025971 6 log.go:172] (0xc0025bdef0) (0xc0023b46e0) Create stream
I0519 21:14:20.025982 6 log.go:172] (0xc0025bdef0) (0xc0023b46e0) Stream added, broadcasting: 5
I0519 21:14:20.026725 6 log.go:172] (0xc0025bdef0) Reply frame received for 5
I0519 21:14:20.084771 6 log.go:172] (0xc0025bdef0) Data frame received for 5
I0519 21:14:20.084811 6 log.go:172] (0xc0023b46e0) (5) Data frame handling
I0519 21:14:20.084835 6 log.go:172] (0xc0025bdef0) Data frame received for 3
I0519 21:14:20.084849 6 log.go:172] (0xc0023b4640) (3) Data frame handling
I0519 21:14:20.084865 6 log.go:172] (0xc0023b4640) (3) Data frame sent
I0519 21:14:20.084890 6 log.go:172] (0xc0025bdef0) Data frame received for 3
I0519 21:14:20.084903 6 log.go:172] (0xc0023b4640) (3) Data frame handling
I0519 21:14:20.086442 6 log.go:172] (0xc0025bdef0) Data frame received for 1
I0519 21:14:20.086475 6 log.go:172] (0xc001bf8fa0) (1) Data frame handling
I0519 21:14:20.086507 6 log.go:172] (0xc001bf8fa0) (1) Data frame sent
I0519 21:14:20.086534 6 log.go:172] (0xc0025bdef0) (0xc001bf8fa0) Stream removed, broadcasting: 1
I0519 21:14:20.086571 6 log.go:172] (0xc0025bdef0) Go away received
I0519 21:14:20.086704 6 log.go:172] (0xc0025bdef0) (0xc001bf8fa0) Stream removed, broadcasting: 1
I0519 21:14:20.086765 6 log.go:172] (0xc0025bdef0) (0xc0023b4640) Stream removed, broadcasting: 3
I0519 21:14:20.086833 6 log.go:172] (0xc0025bdef0) (0xc0023b46e0) Stream removed, broadcasting: 5
May 19 21:14:20.086: INFO: Exec stderr: ""
May 19 21:14:20.086: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:20.086: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:20.119928 6 log.go:172] (0xc0019486e0) (0xc001bf9400) Create stream
I0519 21:14:20.119962 6 log.go:172] (0xc0019486e0) (0xc001bf9400) Stream added, broadcasting: 1
I0519 21:14:20.122582 6 log.go:172] (0xc0019486e0) Reply frame received for 1
I0519 21:14:20.122609 6 log.go:172] (0xc0019486e0) (0xc002366000) Create stream
I0519 21:14:20.122621 6 log.go:172] (0xc0019486e0) (0xc002366000) Stream added, broadcasting: 3
I0519 21:14:20.123393 6 log.go:172] (0xc0019486e0) Reply frame received for 3
I0519 21:14:20.123418 6 log.go:172] (0xc0019486e0) (0xc0023660a0) Create stream
I0519 21:14:20.123425 6 log.go:172] (0xc0019486e0) (0xc0023660a0) Stream added, broadcasting: 5
I0519 21:14:20.124200 6 log.go:172] (0xc0019486e0) Reply frame received for 5
I0519 21:14:20.200839 6 log.go:172] (0xc0019486e0) Data frame received for 5
I0519 21:14:20.200887 6 log.go:172] (0xc0023660a0) (5) Data frame handling
I0519 21:14:20.200915 6 log.go:172] (0xc0019486e0) Data frame received for 3
I0519 21:14:20.200931 6 log.go:172] (0xc002366000) (3) Data frame handling
I0519 21:14:20.200949 6 log.go:172] (0xc002366000) (3) Data frame sent
I0519 21:14:20.200963 6 log.go:172] (0xc0019486e0) Data frame received for 3
I0519 21:14:20.200974 6 log.go:172] (0xc002366000) (3) Data frame handling
I0519 21:14:20.202530 6 log.go:172] (0xc0019486e0) Data frame received for 1
I0519 21:14:20.202547 6 log.go:172] (0xc001bf9400) (1) Data frame handling
I0519 21:14:20.202557 6 log.go:172] (0xc001bf9400) (1) Data frame sent
I0519 21:14:20.202669 6 log.go:172] (0xc0019486e0) (0xc001bf9400) Stream removed, broadcasting: 1
I0519 21:14:20.202785 6 log.go:172] (0xc0019486e0) (0xc001bf9400) Stream removed, broadcasting: 1
I0519 21:14:20.202802 6 log.go:172] (0xc0019486e0) (0xc002366000) Stream removed, broadcasting: 3
I0519 21:14:20.202956 6 log.go:172] (0xc0019486e0) (0xc0023660a0) Stream removed, broadcasting: 5
May 19 21:14:20.202: INFO: Exec stderr: ""
I0519 21:14:20.203000 6 log.go:172] (0xc0019486e0) Go away received
May 19 21:14:20.203: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:20.203: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:20.234751 6 log.go:172] (0xc00229e370) (0xc002366320) Create stream
I0519 21:14:20.234792 6 log.go:172] (0xc00229e370) (0xc002366320) Stream added, broadcasting: 1
I0519 21:14:20.238313 6 log.go:172] (0xc00229e370) Reply frame received for 1
I0519 21:14:20.238364 6 log.go:172] (0xc00229e370) (0xc0023663c0) Create stream
I0519 21:14:20.238384 6 log.go:172] (0xc00229e370) (0xc0023663c0) Stream added, broadcasting: 3
I0519 21:14:20.239179 6 log.go:172] (0xc00229e370) Reply frame received for 3
I0519 21:14:20.239223 6 log.go:172] (0xc00229e370) (0xc0022e40a0) Create stream
I0519 21:14:20.239238 6 log.go:172] (0xc00229e370) (0xc0022e40a0) Stream added, broadcasting: 5
I0519 21:14:20.240300 6 log.go:172] (0xc00229e370) Reply frame received for 5
I0519 21:14:20.305498 6 log.go:172] (0xc00229e370) Data frame received for 5
I0519 21:14:20.305567 6 log.go:172] (0xc0022e40a0) (5) Data frame handling
I0519 21:14:20.305604 6 log.go:172] (0xc00229e370) Data frame received for 3
I0519 21:14:20.305625 6 log.go:172] (0xc0023663c0) (3) Data frame handling
I0519 21:14:20.305648 6 log.go:172] (0xc0023663c0) (3) Data frame sent
I0519 21:14:20.305669 6 log.go:172] (0xc00229e370) Data frame received for 3
I0519 21:14:20.305687 6 log.go:172] (0xc0023663c0) (3) Data frame handling
I0519 21:14:20.306731 6 log.go:172] (0xc00229e370) Data frame received for 1
I0519 21:14:20.306761 6 log.go:172] (0xc002366320) (1) Data frame handling
I0519 21:14:20.306777 6 log.go:172] (0xc002366320) (1) Data frame sent
I0519 21:14:20.306796 6 log.go:172] (0xc00229e370) (0xc002366320) Stream removed, broadcasting: 1
I0519 21:14:20.306829 6 log.go:172] (0xc00229e370) Go away received
I0519 21:14:20.306974 6 log.go:172] (0xc00229e370) (0xc002366320) Stream removed, broadcasting: 1
I0519 21:14:20.307019 6 log.go:172] (0xc00229e370) (0xc0023663c0) Stream removed, broadcasting: 3
I0519 21:14:20.307043 6 log.go:172] (0xc00229e370) (0xc0022e40a0) Stream removed, broadcasting: 5
May 19 21:14:20.307: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 19 21:14:20.307: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:20.307: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:20.339160 6 log.go:172] (0xc001760370) (0xc0023b4a00) Create stream
I0519 21:14:20.339189 6 log.go:172] (0xc001760370) (0xc0023b4a00) Stream added, broadcasting: 1
I0519 21:14:20.342159 6 log.go:172] (0xc001760370) Reply frame received for 1
I0519 21:14:20.342201 6 log.go:172] (0xc001760370) (0xc0022e41e0) Create stream
I0519 21:14:20.342215 6 log.go:172] (0xc001760370) (0xc0022e41e0) Stream added, broadcasting: 3
I0519 21:14:20.343266 6 log.go:172] (0xc001760370) Reply frame received for 3
I0519 21:14:20.343324 6 log.go:172] (0xc001760370) (0xc001bf9540) Create stream
I0519 21:14:20.343341 6 log.go:172] (0xc001760370) (0xc001bf9540) Stream added, broadcasting: 5
I0519 21:14:20.344145 6 log.go:172] (0xc001760370) Reply frame received for 5
I0519 21:14:20.427515 6 log.go:172] (0xc001760370) Data frame received for 5
I0519 21:14:20.427546 6 log.go:172] (0xc001bf9540) (5) Data frame handling
I0519 21:14:20.427586 6 log.go:172] (0xc001760370) Data frame received for 3
I0519 21:14:20.427618 6 log.go:172] (0xc0022e41e0) (3) Data frame handling
I0519 21:14:20.427645 6 log.go:172] (0xc0022e41e0) (3) Data frame sent
I0519 21:14:20.427664 6 log.go:172] (0xc001760370) Data frame received for 3
I0519 21:14:20.427679 6 log.go:172] (0xc0022e41e0) (3) Data frame handling
I0519 21:14:20.428943 6 log.go:172] (0xc001760370) Data frame received for 1
I0519 21:14:20.428971 6 log.go:172] (0xc0023b4a00) (1) Data frame handling
I0519 21:14:20.428993 6 log.go:172] (0xc0023b4a00) (1) Data frame sent
I0519 21:14:20.429012 6 log.go:172] (0xc001760370) (0xc0023b4a00) Stream removed, broadcasting: 1
I0519 21:14:20.429032 6 log.go:172] (0xc001760370) Go away received
I0519 21:14:20.429347 6 log.go:172] (0xc001760370) (0xc0023b4a00) Stream removed, broadcasting: 1
I0519 21:14:20.429379 6 log.go:172] (0xc001760370) (0xc0022e41e0) Stream removed, broadcasting: 3
I0519 21:14:20.429388 6 log.go:172] (0xc001760370) (0xc001bf9540) Stream removed, broadcasting: 5
May 19 21:14:20.429: INFO: Exec stderr: ""
May 19 21:14:20.429: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:20.429: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:20.460308 6 log.go:172] (0xc001db4420) (0xc0022e4460) Create stream
I0519 21:14:20.460338 6 log.go:172] (0xc001db4420) (0xc0022e4460) Stream added, broadcasting: 1
I0519 21:14:20.463830 6 log.go:172] (0xc001db4420) Reply frame received for 1
I0519 21:14:20.463880 6 log.go:172] (0xc001db4420) (0xc001bf9720) Create stream
I0519 21:14:20.463900 6 log.go:172] (0xc001db4420) (0xc001bf9720) Stream added, broadcasting: 3
I0519 21:14:20.465305 6 log.go:172] (0xc001db4420) Reply frame received for 3
I0519 21:14:20.465368 6 log.go:172] (0xc001db4420) (0xc0023b4aa0) Create stream
I0519 21:14:20.465405 6 log.go:172] (0xc001db4420) (0xc0023b4aa0) Stream added, broadcasting: 5
I0519 21:14:20.466594 6 log.go:172] (0xc001db4420) Reply frame received for 5
I0519 21:14:20.527916 6 log.go:172] (0xc001db4420) Data frame received for 5
I0519 21:14:20.527953 6 log.go:172] (0xc0023b4aa0) (5) Data frame handling
I0519 21:14:20.527987 6 log.go:172] (0xc001db4420) Data frame received for 3
I0519 21:14:20.528016 6 log.go:172] (0xc001bf9720) (3) Data frame handling
I0519 21:14:20.528032 6 log.go:172] (0xc001bf9720) (3) Data frame sent
I0519 21:14:20.528043 6 log.go:172] (0xc001db4420) Data frame received for 3
I0519 21:14:20.528055 6 log.go:172] (0xc001bf9720) (3) Data frame handling
I0519 21:14:20.529009 6 log.go:172] (0xc001db4420) Data frame received for 1
I0519 21:14:20.529053 6 log.go:172] (0xc0022e4460) (1) Data frame handling
I0519 21:14:20.529071 6 log.go:172] (0xc0022e4460) (1) Data frame sent
I0519 21:14:20.529083 6 log.go:172] (0xc001db4420) (0xc0022e4460) Stream removed, broadcasting: 1
I0519 21:14:20.529099 6 log.go:172] (0xc001db4420) Go away received
I0519 21:14:20.529340 6 log.go:172] (0xc001db4420) (0xc0022e4460) Stream removed, broadcasting: 1
I0519 21:14:20.529368 6 log.go:172] (0xc001db4420) (0xc001bf9720) Stream removed, broadcasting: 3
I0519 21:14:20.529375 6 log.go:172] (0xc001db4420) (0xc0023b4aa0) Stream removed, broadcasting: 5
May 19 21:14:20.529: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 19 21:14:20.529: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:20.529: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:20.579896 6 log.go:172] (0xc00229e6e0) (0xc002366500) Create stream
I0519 21:14:20.579935 6 log.go:172] (0xc00229e6e0) (0xc002366500) Stream added, broadcasting: 1
I0519 21:14:20.583332 6 log.go:172] (0xc00229e6e0) Reply frame received for 1
I0519 21:14:20.583369 6 log.go:172] (0xc00229e6e0) (0xc001bf9860) Create stream
I0519 21:14:20.583389 6 log.go:172] (0xc00229e6e0) (0xc001bf9860) Stream added, broadcasting: 3
I0519 21:14:20.584300 6 log.go:172] (0xc00229e6e0) Reply frame received for 3
I0519 21:14:20.584335 6 log.go:172] (0xc00229e6e0) (0xc0022e4500) Create stream
I0519 21:14:20.584348 6 log.go:172] (0xc00229e6e0) (0xc0022e4500) Stream added, broadcasting: 5
I0519 21:14:20.585098 6 log.go:172] (0xc00229e6e0) Reply frame received for 5
I0519 21:14:20.658726 6 log.go:172] (0xc00229e6e0) Data frame received for 5
I0519 21:14:20.658768 6 log.go:172] (0xc0022e4500) (5) Data frame handling
I0519 21:14:20.658789 6 log.go:172] (0xc00229e6e0) Data frame received for 3
I0519 21:14:20.658810 6 log.go:172] (0xc001bf9860) (3) Data frame handling
I0519 21:14:20.658828 6 log.go:172] (0xc001bf9860) (3) Data frame sent
I0519 21:14:20.658840 6 log.go:172] (0xc00229e6e0) Data frame received for 3
I0519 21:14:20.658847 6 log.go:172] (0xc001bf9860) (3) Data frame handling
I0519 21:14:20.659764 6 log.go:172] (0xc00229e6e0) Data frame received for 1
I0519 21:14:20.659802 6 log.go:172] (0xc002366500) (1) Data frame handling
I0519 21:14:20.659825 6 log.go:172] (0xc002366500) (1) Data frame sent
I0519 21:14:20.659842 6 log.go:172] (0xc00229e6e0) (0xc002366500) Stream removed, broadcasting: 1
I0519 21:14:20.659861 6 log.go:172] (0xc00229e6e0) Go away received
I0519 21:14:20.659965 6 log.go:172] (0xc00229e6e0) (0xc002366500) Stream removed, broadcasting: 1
I0519 21:14:20.659977 6 log.go:172] (0xc00229e6e0) (0xc001bf9860) Stream removed, broadcasting: 3
I0519 21:14:20.659982 6 log.go:172] (0xc00229e6e0) (0xc0022e4500) Stream removed, broadcasting: 5
May 19 21:14:20.659: INFO: Exec stderr: ""
May 19 21:14:20.660: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:20.660: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:20.682361 6 log.go:172] (0xc0027ca630) (0xc00225c640) Create stream
I0519 21:14:20.682391 6 log.go:172] (0xc0027ca630) (0xc00225c640) Stream added, broadcasting: 1
I0519 21:14:20.684681 6 log.go:172] (0xc0027ca630) Reply frame received for 1
I0519 21:14:20.684727 6 log.go:172] (0xc0027ca630) (0xc0022e45a0) Create stream
I0519 21:14:20.684748 6 log.go:172] (0xc0027ca630) (0xc0022e45a0) Stream added, broadcasting: 3
I0519 21:14:20.685661 6 log.go:172] (0xc0027ca630) Reply frame received for 3
I0519 21:14:20.685699 6 log.go:172] (0xc0027ca630) (0xc0023665a0) Create stream
I0519 21:14:20.685712 6 log.go:172] (0xc0027ca630) (0xc0023665a0) Stream added, broadcasting: 5
I0519 21:14:20.686535 6 log.go:172] (0xc0027ca630) Reply frame received for 5
I0519 21:14:20.750783 6 log.go:172] (0xc0027ca630) Data frame received for 5
I0519 21:14:20.750812 6 log.go:172] (0xc0023665a0) (5) Data frame handling
I0519 21:14:20.750857 6 log.go:172] (0xc0027ca630) Data frame received for 3
I0519 21:14:20.750893 6 log.go:172] (0xc0022e45a0) (3) Data frame handling
I0519 21:14:20.750918 6 log.go:172] (0xc0022e45a0) (3) Data frame sent
I0519 21:14:20.750935 6 log.go:172] (0xc0027ca630) Data frame received for 3
I0519 21:14:20.750948 6 log.go:172] (0xc0022e45a0) (3) Data frame handling
I0519 21:14:20.752415 6 log.go:172] (0xc0027ca630) Data frame received for 1
I0519 21:14:20.752440 6 log.go:172] (0xc00225c640) (1) Data frame handling
I0519 21:14:20.752453 6 log.go:172] (0xc00225c640) (1) Data frame sent
I0519 21:14:20.752477 6 log.go:172] (0xc0027ca630) (0xc00225c640) Stream removed, broadcasting: 1
I0519 21:14:20.752503 6 log.go:172] (0xc0027ca630) Go away received
I0519 21:14:20.752568 6 log.go:172] (0xc0027ca630) (0xc00225c640) Stream removed, broadcasting: 1
I0519 21:14:20.752596 6 log.go:172] (0xc0027ca630) (0xc0022e45a0) Stream removed, broadcasting: 3
I0519 21:14:20.752607 6 log.go:172] (0xc0027ca630) (0xc0023665a0) Stream removed, broadcasting: 5
May 19 21:14:20.752: INFO: Exec stderr: ""
May 19 21:14:20.752: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:20.752: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:20.782717 6 log.go:172] (0xc001948e70) (0xc001bf9cc0) Create stream
I0519 21:14:20.782743 6 log.go:172] (0xc001948e70) (0xc001bf9cc0) Stream added, broadcasting: 1
I0519 21:14:20.784775 6 log.go:172] (0xc001948e70) Reply frame received for 1
I0519 21:14:20.784810 6 log.go:172] (0xc001948e70) (0xc0023b4b40) Create stream
I0519 21:14:20.784818 6 log.go:172] (0xc001948e70) (0xc0023b4b40) Stream added, broadcasting: 3
I0519 21:14:20.785812 6 log.go:172] (0xc001948e70) Reply frame received for 3
I0519 21:14:20.785843 6 log.go:172] (0xc001948e70) (0xc00225c820) Create stream
I0519 21:14:20.785849 6 log.go:172] (0xc001948e70) (0xc00225c820) Stream added, broadcasting: 5
I0519 21:14:20.786510 6 log.go:172] (0xc001948e70) Reply frame received for 5
I0519 21:14:20.856471 6 log.go:172] (0xc001948e70) Data frame received for 5
I0519 21:14:20.856530 6 log.go:172] (0xc001948e70) Data frame received for 3
I0519 21:14:20.856569 6 log.go:172] (0xc0023b4b40) (3) Data frame handling
I0519 21:14:20.856580 6 log.go:172] (0xc0023b4b40) (3) Data frame sent
I0519 21:14:20.856590 6 log.go:172] (0xc001948e70) Data frame received for 3
I0519 21:14:20.856603 6 log.go:172] (0xc0023b4b40) (3) Data frame handling
I0519 21:14:20.856623 6 log.go:172] (0xc00225c820) (5) Data frame handling
I0519 21:14:20.858125 6 log.go:172] (0xc001948e70) Data frame received for 1
I0519 21:14:20.858149 6 log.go:172] (0xc001bf9cc0) (1) Data frame handling
I0519 21:14:20.858163 6 log.go:172] (0xc001bf9cc0) (1) Data frame sent
I0519 21:14:20.858181 6 log.go:172] (0xc001948e70) (0xc001bf9cc0) Stream removed, broadcasting: 1
I0519 21:14:20.858206 6 log.go:172] (0xc001948e70) Go away received
I0519 21:14:20.858341 6 log.go:172] (0xc001948e70) (0xc001bf9cc0) Stream removed, broadcasting: 1
I0519 21:14:20.858359 6 log.go:172] (0xc001948e70) (0xc0023b4b40) Stream removed, broadcasting: 3
I0519 21:14:20.858370 6 log.go:172] (0xc001948e70) (0xc00225c820) Stream removed, broadcasting: 5
May 19 21:14:20.858: INFO: Exec stderr: ""
May 19 21:14:20.858: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2273 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 19 21:14:20.858: INFO: >>> kubeConfig: /root/.kube/config
I0519 21:14:20.887491 6 log.go:172] (0xc001760b00) (0xc0023b4d20) Create stream
I0519 21:14:20.887514 6 log.go:172] (0xc001760b00) (0xc0023b4d20) Stream added, broadcasting: 1
I0519 21:14:20.890645 6 log.go:172] (0xc001760b00) Reply frame received for 1
I0519 21:14:20.890667 6 log.go:172] (0xc001760b00) (0xc0023b4dc0) Create stream
I0519 21:14:20.890689 6 log.go:172] (0xc001760b00) (0xc0023b4dc0) Stream added, broadcasting: 3
I0519 21:14:20.891721 6 log.go:172] (0xc001760b00) Reply frame received for 3
I0519 21:14:20.891767 6 log.go:172] (0xc001760b00) (0xc0023b4e60) Create stream
I0519 21:14:20.891780 6 log.go:172] (0xc001760b00) (0xc0023b4e60) Stream added, broadcasting: 5
I0519 21:14:20.892899 6 log.go:172] (0xc001760b00) Reply frame received for 5
I0519 21:14:20.958402 6 log.go:172] (0xc001760b00) Data frame received for 3
I0519 21:14:20.958434 6 log.go:172] (0xc0023b4dc0) (3) Data frame handling
I0519 21:14:20.958456 6 log.go:172] (0xc0023b4dc0) (3) Data frame sent
I0519 21:14:20.958471 6 log.go:172] (0xc001760b00) Data frame received for 3
I0519 21:14:20.958486 6 log.go:172] (0xc0023b4dc0) (3) Data frame handling
I0519 21:14:20.958512 6 log.go:172] (0xc001760b00) Data frame received for 5
I0519 21:14:20.958540 6 log.go:172] (0xc0023b4e60) (5) Data frame handling
I0519 21:14:20.960118 6 log.go:172] (0xc001760b00) Data frame received for 1
I0519 21:14:20.960147 6 log.go:172] (0xc0023b4d20) (1) Data frame handling
I0519 21:14:20.960157 6 log.go:172] (0xc0023b4d20) (1) Data frame sent
I0519 21:14:20.960167 6 log.go:172] (0xc001760b00) (0xc0023b4d20) Stream removed, broadcasting: 1
I0519 21:14:20.960186 6 log.go:172] (0xc001760b00) Go away received
I0519 21:14:20.960295 6 log.go:172] (0xc001760b00) (0xc0023b4d20) Stream removed, broadcasting: 1
I0519 21:14:20.960310 6 log.go:172] (0xc001760b00) (0xc0023b4dc0) Stream removed, broadcasting: 3
I0519 21:14:20.960316 6 log.go:172] (0xc001760b00) (0xc0023b4e60) Stream removed, broadcasting: 5
May 19 21:14:20.960: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:14:20.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-2273" for this suite.
• [SLOW TEST:14.001 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":251,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:14:20.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 19 21:14:21.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a95861ea-1547-4110-81c5-158c922769c8" in namespace "projected-5843" to be "success or failure"
May 19 21:14:21.159: INFO: Pod "downwardapi-volume-a95861ea-1547-4110-81c5-158c922769c8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.248657ms
May 19 21:14:23.203: INFO: Pod "downwardapi-volume-a95861ea-1547-4110-81c5-158c922769c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061196789s
May 19 21:14:25.209: INFO: Pod "downwardapi-volume-a95861ea-1547-4110-81c5-158c922769c8": Phase="Running", Reason="", readiness=true. Elapsed: 4.067593294s
May 19 21:14:27.220: INFO: Pod "downwardapi-volume-a95861ea-1547-4110-81c5-158c922769c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.078143781s
STEP: Saw pod success
May 19 21:14:27.220: INFO: Pod "downwardapi-volume-a95861ea-1547-4110-81c5-158c922769c8" satisfied condition "success or failure"
May 19 21:14:27.222: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a95861ea-1547-4110-81c5-158c922769c8 container client-container:
STEP: delete the pod
May 19 21:14:27.242: INFO: Waiting for pod downwardapi-volume-a95861ea-1547-4110-81c5-158c922769c8 to disappear
May 19 21:14:27.246: INFO: Pod downwardapi-volume-a95861ea-1547-4110-81c5-158c922769c8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 21:14:27.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5843" for this suite.
• [SLOW TEST:6.285 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":290,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 21:14:27.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-c38abcb4-204c-47b8-9516-08015b96af86
STEP: Creating a pod to test consume configMaps
May 19 21:14:27.400: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d154b014-4f54-4dbc-8ec4-140686a672a3" in namespace "projected-7906" to be "success or failure"
May 19 21:14:27.404: INFO: Pod "pod-projected-configmaps-d154b014-4f54-4dbc-8ec4-140686a672a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.942792ms
May 19 21:14:29.408: INFO: Pod "pod-projected-configmaps-d154b014-4f54-4dbc-8ec4-140686a672a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007728499s
May 19 21:14:31.411: INFO: Pod "pod-projected-configmaps-d154b014-4f54-4dbc-8ec4-140686a672a3": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.011028123s STEP: Saw pod success May 19 21:14:31.411: INFO: Pod "pod-projected-configmaps-d154b014-4f54-4dbc-8ec4-140686a672a3" satisfied condition "success or failure" May 19 21:14:31.413: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-d154b014-4f54-4dbc-8ec4-140686a672a3 container projected-configmap-volume-test: STEP: delete the pod May 19 21:14:31.444: INFO: Waiting for pod pod-projected-configmaps-d154b014-4f54-4dbc-8ec4-140686a672a3 to disappear May 19 21:14:31.452: INFO: Pod pod-projected-configmaps-d154b014-4f54-4dbc-8ec4-140686a672a3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:14:31.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7906" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":295,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:14:31.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 19 21:14:31.530: INFO: Waiting up to 5m0s for pod "pod-293920a6-e42d-44d3-b5db-d41566498d41" in namespace "emptydir-1595" to be "success or failure" May 19 21:14:31.536: INFO: Pod "pod-293920a6-e42d-44d3-b5db-d41566498d41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069913ms May 19 21:14:33.567: INFO: Pod "pod-293920a6-e42d-44d3-b5db-d41566498d41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037166432s May 19 21:14:35.579: INFO: Pod "pod-293920a6-e42d-44d3-b5db-d41566498d41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04955276s STEP: Saw pod success May 19 21:14:35.579: INFO: Pod "pod-293920a6-e42d-44d3-b5db-d41566498d41" satisfied condition "success or failure" May 19 21:14:35.591: INFO: Trying to get logs from node jerma-worker2 pod pod-293920a6-e42d-44d3-b5db-d41566498d41 container test-container: STEP: delete the pod May 19 21:14:35.735: INFO: Waiting for pod pod-293920a6-e42d-44d3-b5db-d41566498d41 to disappear May 19 21:14:35.824: INFO: Pod pod-293920a6-e42d-44d3-b5db-d41566498d41 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:14:35.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1595" for this suite. 
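Annotation: the emptydir spec above follows the same create pod, wait for Succeeded, read logs pattern as the volume specs before it. A minimal Go sketch of the kind of pod it exercises follows; the pod name, image, and shell command are illustrative, not taken from this run (the suite uses its own mount-test image).

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    // emptyDirPod builds a pod resembling the one this spec creates: a
    // non-root container writing a 0777 file into an emptyDir volume on
    // the node's default storage medium.
    func emptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox", // illustrative image
                    Command: []string{"sh", "-c", "touch /mnt/f && chmod 0777 /mnt/f && ls -l /mnt/f"},
                    SecurityContext: &corev1.SecurityContext{
                        RunAsUser: int64Ptr(1000), // non-root, per the (non-root,0777,default) variant
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/mnt"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "data",
                    // Leaving Medium empty selects the default medium named in the spec title.
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
            },
        }
    }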
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:14:35.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:14:35.957: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 19 21:14:38.000: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:14:39.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2242" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":15,"skipped":390,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:14:39.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:14:39.430: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 19 21:14:43.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9669 create -f -' May 19 21:14:49.067: INFO: stderr: "" May 19 21:14:49.067: INFO: stdout: "e2e-test-crd-publish-openapi-7436-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 19 21:14:49.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9669 delete e2e-test-crd-publish-openapi-7436-crds test-cr' May 19 21:14:49.176: INFO: stderr: "" May 19 21:14:49.176: INFO: stdout: "e2e-test-crd-publish-openapi-7436-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 19 21:14:49.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9669 apply -f -' May 19 21:14:50.095: INFO: stderr: "" May 19 21:14:50.095: INFO: stdout: "e2e-test-crd-publish-openapi-7436-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 19 21:14:50.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9669 delete e2e-test-crd-publish-openapi-7436-crds test-cr' May 19 21:14:50.205: INFO: stderr: "" May 19 21:14:50.205: INFO: stdout: "e2e-test-crd-publish-openapi-7436-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 19 21:14:50.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7436-crds' May 19 21:14:50.469: INFO: stderr: "" May 19 21:14:50.469: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7436-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:14:53.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9669" for this suite. 
• [SLOW TEST:14.074 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":16,"skipped":401,"failed":0} [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:14:53.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3968.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3968.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3968.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3968.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 21:15:01.554: INFO: DNS probes using dns-3968/dns-test-25c7b277-91cb-4651-9c46-ca732702ae20 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:15:01.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3968" for this suite. 
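Annotation: the wheezy/jessie command strings printed by the DNS spec above are shell loops that poll getent and dig and drop OK marker files into /results; the probe pod simply wraps such a loop in a container, and the test then reads /results back out of the running pod. A minimal sketch of that wrapping, with the pod name and image as illustrative stand-ins.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // dnsProbePod wraps a probe loop like the ones logged above in a
    // container that writes its results into a shared emptyDir.
    func dnsProbePod(probeCmd string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-probe"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "querier",
                    Image:        "busybox", // illustrative; the suite uses dedicated probe images
                    Command:      []string{"sh", "-c", probeCmd},
                    VolumeMounts: []corev1.VolumeMount{{Name: "results", MountPath: "/results"}},
                }},
                Volumes: []corev1.Volume{{
                    Name:         "results",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
            },
        }
    }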
• [SLOW TEST:8.258 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":17,"skipped":401,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:15:01.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 19 21:15:06.239: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7780 pod-service-account-7f3513a3-0b80-44c5-8615-4a623b09ac6d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 19 21:15:06.464: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7780 pod-service-account-7f3513a3-0b80-44c5-8615-4a623b09ac6d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 19 21:15:06.660: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7780 pod-service-account-7f3513a3-0b80-44c5-8615-4a623b09ac6d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:15:06.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7780" for this suite. 
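Annotation: the three `kubectl exec ... cat` calls in the ServiceAccounts spec above read the token, CA bundle, and namespace from the fixed path where the kubelet projects service account credentials. That is the same path client-go's rest.InClusterConfig() consumes. A small self-contained program that checks the mount from inside a pod, as a sketch rather than the suite's own code:

    package main

    import (
        "fmt"
        "io/ioutil"
    )

    // The kubelet mounts the service account credentials at this fixed path.
    const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

    func main() {
        for _, f := range []string{"token", "ca.crt", "namespace"} {
            data, err := ioutil.ReadFile(saDir + "/" + f)
            if err != nil {
                fmt.Println(f, "not mounted:", err)
                continue
            }
            fmt.Printf("%s: %d bytes\n", f, len(data))
        }
    }

Run inside any pod with a default service account, this prints a non-zero byte count for each of the three files the spec reads.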
• [SLOW TEST:5.231 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":18,"skipped":407,"failed":0} [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:15:06.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:15:12.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5928" for this suite. • [SLOW TEST:5.511 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":19,"skipped":407,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:15:12.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:15:12.450: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 19 21:15:15.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6647 create -f -' May 19 21:15:19.438: INFO: stderr: "" May 19 21:15:19.438: INFO: stdout: "e2e-test-crd-publish-openapi-3191-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 19 21:15:19.438: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6647 delete e2e-test-crd-publish-openapi-3191-crds test-cr' May 19 21:15:19.547: INFO: stderr: "" May 19 21:15:19.547: INFO: stdout: "e2e-test-crd-publish-openapi-3191-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 19 21:15:19.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6647 apply -f -' May 19 21:15:20.351: INFO: stderr: "" May 19 21:15:20.351: INFO: stdout: "e2e-test-crd-publish-openapi-3191-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 19 21:15:20.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6647 delete e2e-test-crd-publish-openapi-3191-crds test-cr' May 19 21:15:20.461: INFO: stderr: "" May 19 21:15:20.461: INFO: stdout: "e2e-test-crd-publish-openapi-3191-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 19 21:15:20.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3191-crds' May 19 21:15:21.234: INFO: stderr: "" May 19 21:15:21.234: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3191-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:15:23.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6647" for this suite. 
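Annotation: the `kubectl explain` output above documents apiVersion, kind, and metadata in full while leaving spec and status open ("Specification of Waldo", "Status of Waldo"). In apiextensions v1beta1 terms, that shape comes from marking the nested fields with x-kubernetes-preserve-unknown-fields and x-kubernetes-embedded-resource. A hedged sketch of such a schema; the field descriptions mirror the explain output, the rest is illustrative.

    package sketch

    import apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"

    func boolPtr(b bool) *bool { return &b }

    // embeddedWaldoSchema sketches a schema whose embedded-object field
    // keeps unknown properties, so only the standard object fields get
    // full OpenAPI documentation.
    func embeddedWaldoSchema() *apiextv1beta1.JSONSchemaProps {
        return &apiextv1beta1.JSONSchemaProps{
            Type: "object",
            Properties: map[string]apiextv1beta1.JSONSchemaProps{
                "spec": {
                    Type:                   "object",
                    Description:            "Specification of Waldo",
                    XPreserveUnknownFields: boolPtr(true),
                    XEmbeddedResource:      true, // field holds a full Kubernetes object
                },
                "status": {
                    Type:                   "object",
                    Description:            "Status of Waldo",
                    XPreserveUnknownFields: boolPtr(true),
                },
            },
        }
    }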
• [SLOW TEST:10.781 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":20,"skipped":410,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:15:23.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 19 21:15:23.271: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:15:28.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-102" for this suite. • [SLOW TEST:5.524 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":21,"skipped":421,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:15:28.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:15:29.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8837" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":22,"skipped":427,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:15:29.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4354 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4354 STEP: Creating statefulset with conflicting port in namespace statefulset-4354 STEP: Waiting until pod test-pod will start running in namespace statefulset-4354 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4354 May 19 21:15:33.211: INFO: Observed stateful pod in namespace: statefulset-4354, name: ss-0, uid: 1e374e51-5d9f-4a3b-a8e2-2cb09aa77163, status phase: Pending. Waiting for statefulset controller to delete. May 19 21:15:33.820: INFO: Observed stateful pod in namespace: statefulset-4354, name: ss-0, uid: 1e374e51-5d9f-4a3b-a8e2-2cb09aa77163, status phase: Failed. Waiting for statefulset controller to delete. May 19 21:15:33.850: INFO: Observed stateful pod in namespace: statefulset-4354, name: ss-0, uid: 1e374e51-5d9f-4a3b-a8e2-2cb09aa77163, status phase: Failed. Waiting for statefulset controller to delete. 
May 19 21:15:33.895: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4354 STEP: Removing pod with conflicting port in namespace statefulset-4354 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4354 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 19 21:15:38.092: INFO: Deleting all statefulset in ns statefulset-4354 May 19 21:15:38.095: INFO: Scaling statefulset ss to 0 May 19 21:15:58.115: INFO: Waiting for statefulset status.replicas updated to 0 May 19 21:15:58.118: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:15:58.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4354" for this suite. • [SLOW TEST:29.116 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":23,"skipped":427,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:15:58.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:15:58.887: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:16:00.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519758, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519758, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519759, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519758, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:16:03.983: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:16:03.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-257-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:16:05.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7525" for this suite. STEP: Destroying namespace "webhook-7525-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.138 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":24,"skipped":429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:05.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-f1b492b4-ee15-4870-89f7-b94807a82481 STEP: Creating a pod to test consume configMaps May 19 21:16:05.432: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8e5b6a36-b4ff-4e0b-a57a-d184fd14622b" in namespace "projected-3053" to be "success or failure" May 19 21:16:05.435: INFO: Pod "pod-projected-configmaps-8e5b6a36-b4ff-4e0b-a57a-d184fd14622b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.511775ms May 19 21:16:07.479: INFO: Pod "pod-projected-configmaps-8e5b6a36-b4ff-4e0b-a57a-d184fd14622b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047299134s May 19 21:16:09.482: INFO: Pod "pod-projected-configmaps-8e5b6a36-b4ff-4e0b-a57a-d184fd14622b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050668916s STEP: Saw pod success May 19 21:16:09.482: INFO: Pod "pod-projected-configmaps-8e5b6a36-b4ff-4e0b-a57a-d184fd14622b" satisfied condition "success or failure" May 19 21:16:09.485: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-8e5b6a36-b4ff-4e0b-a57a-d184fd14622b container projected-configmap-volume-test: STEP: delete the pod May 19 21:16:09.597: INFO: Waiting for pod pod-projected-configmaps-8e5b6a36-b4ff-4e0b-a57a-d184fd14622b to disappear May 19 21:16:09.622: INFO: Pod pod-projected-configmaps-8e5b6a36-b4ff-4e0b-a57a-d184fd14622b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:16:09.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3053" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":466,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:09.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-1213/secret-test-3c93369f-99dd-415b-a0a0-45014f9e9fa3 STEP: Creating a pod to test consume secrets May 19 21:16:09.743: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1c502ca-abc1-4ea8-b68c-da07770bfa14" in namespace "secrets-1213" to be "success or failure" May 19 21:16:09.747: INFO: Pod "pod-configmaps-f1c502ca-abc1-4ea8-b68c-da07770bfa14": Phase="Pending", Reason="", readiness=false. Elapsed: 3.172078ms May 19 21:16:11.775: INFO: Pod "pod-configmaps-f1c502ca-abc1-4ea8-b68c-da07770bfa14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031974285s May 19 21:16:13.779: INFO: Pod "pod-configmaps-f1c502ca-abc1-4ea8-b68c-da07770bfa14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035714878s STEP: Saw pod success May 19 21:16:13.779: INFO: Pod "pod-configmaps-f1c502ca-abc1-4ea8-b68c-da07770bfa14" satisfied condition "success or failure" May 19 21:16:13.782: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-f1c502ca-abc1-4ea8-b68c-da07770bfa14 container env-test: STEP: delete the pod May 19 21:16:13.805: INFO: Waiting for pod pod-configmaps-f1c502ca-abc1-4ea8-b68c-da07770bfa14 to disappear May 19 21:16:13.810: INFO: Pod pod-configmaps-f1c502ca-abc1-4ea8-b68c-da07770bfa14 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:16:13.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1213" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":478,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:13.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-cc4bb85c-53d4-498b-a7dc-63a47f536e80 STEP: Creating a pod to test consume configMaps May 19 21:16:13.938: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b4fad80-8531-4697-9557-50ac62c5e9d8" in namespace "projected-4462" to be "success or failure" May 19 21:16:14.109: INFO: Pod "pod-projected-configmaps-6b4fad80-8531-4697-9557-50ac62c5e9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 171.365551ms May 19 21:16:16.113: INFO: Pod "pod-projected-configmaps-6b4fad80-8531-4697-9557-50ac62c5e9d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175671952s May 19 21:16:18.118: INFO: Pod "pod-projected-configmaps-6b4fad80-8531-4697-9557-50ac62c5e9d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1799818s STEP: Saw pod success May 19 21:16:18.118: INFO: Pod "pod-projected-configmaps-6b4fad80-8531-4697-9557-50ac62c5e9d8" satisfied condition "success or failure" May 19 21:16:18.122: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-6b4fad80-8531-4697-9557-50ac62c5e9d8 container projected-configmap-volume-test: STEP: delete the pod May 19 21:16:18.165: INFO: Waiting for pod pod-projected-configmaps-6b4fad80-8531-4697-9557-50ac62c5e9d8 to disappear May 19 21:16:18.181: INFO: Pod pod-projected-configmaps-6b4fad80-8531-4697-9557-50ac62c5e9d8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:16:18.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4462" for this suite. 
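Annotation: the "multiple volumes in the same pod" spec above projects one configMap through two separate volumes and checks both mounts are readable. A sketch of how that pod spec is wired, assuming standard corev1 types; volume names and mount paths are illustrative.

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // projectedTwice builds two projected volumes backed by the same
    // configMap, plus the matching mounts, which is the arrangement this
    // spec verifies is consumable.
    func projectedTwice(cmName string) ([]corev1.Volume, []corev1.VolumeMount) {
        mkVol := func(volName string) corev1.Volume {
            return corev1.Volume{
                Name: volName,
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                            },
                        }},
                    },
                },
            }
        }
        vols := []corev1.Volume{mkVol("projected-1"), mkVol("projected-2")}
        mounts := []corev1.VolumeMount{
            {Name: "projected-1", MountPath: "/etc/projected-volume-1"},
            {Name: "projected-2", MountPath: "/etc/projected-volume-2"},
        }
        return vols, mounts
    }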
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":483,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:18.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:16:31.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-75" for this suite. • [SLOW TEST:13.236 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":28,"skipped":505,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:31.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 19 21:16:31.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-256' May 19 21:16:31.839: INFO: stderr: "" May 19 21:16:31.839: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 21:16:31.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-256' May 19 21:16:31.967: INFO: stderr: "" May 19 21:16:31.967: INFO: stdout: "update-demo-nautilus-5gt5z update-demo-nautilus-6cwxn " May 19 21:16:31.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5gt5z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-256' May 19 21:16:32.069: INFO: stderr: "" May 19 21:16:32.069: INFO: stdout: "" May 19 21:16:32.069: INFO: update-demo-nautilus-5gt5z is created but not running May 19 21:16:37.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-256' May 19 21:16:37.171: INFO: stderr: "" May 19 21:16:37.171: INFO: stdout: "update-demo-nautilus-5gt5z update-demo-nautilus-6cwxn " May 19 21:16:37.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5gt5z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-256' May 19 21:16:37.260: INFO: stderr: "" May 19 21:16:37.260: INFO: stdout: "true" May 19 21:16:37.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5gt5z -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-256' May 19 21:16:37.389: INFO: stderr: "" May 19 21:16:37.389: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:16:37.389: INFO: validating pod update-demo-nautilus-5gt5z May 19 21:16:37.431: INFO: got data: { "image": "nautilus.jpg" } May 19 21:16:37.431: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 21:16:37.431: INFO: update-demo-nautilus-5gt5z is verified up and running May 19 21:16:37.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cwxn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-256' May 19 21:16:37.528: INFO: stderr: "" May 19 21:16:37.528: INFO: stdout: "true" May 19 21:16:37.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6cwxn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-256' May 19 21:16:37.625: INFO: stderr: "" May 19 21:16:37.625: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:16:37.625: INFO: validating pod update-demo-nautilus-6cwxn May 19 21:16:37.634: INFO: got data: { "image": "nautilus.jpg" } May 19 21:16:37.634: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 21:16:37.634: INFO: update-demo-nautilus-6cwxn is verified up and running STEP: using delete to clean up resources May 19 21:16:37.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-256' May 19 21:16:37.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 19 21:16:37.750: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 19 21:16:37.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-256' May 19 21:16:37.847: INFO: stderr: "No resources found in kubectl-256 namespace.\n" May 19 21:16:37.847: INFO: stdout: "" May 19 21:16:37.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-256 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 21:16:37.944: INFO: stderr: "" May 19 21:16:37.944: INFO: stdout: "update-demo-nautilus-5gt5z\nupdate-demo-nautilus-6cwxn\n" May 19 21:16:38.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-256' May 19 21:16:38.547: INFO: stderr: "No resources found in kubectl-256 namespace.\n" May 19 21:16:38.547: INFO: stdout: "" May 19 21:16:38.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-256 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 21:16:38.637: INFO: stderr: "" May 19 21:16:38.637: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:16:38.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-256" for this suite. • [SLOW TEST:7.218 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":29,"skipped":508,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:38.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:16:38.881: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 
21:16:40.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4895" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":30,"skipped":514,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:40.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:16:40.215: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-91435283-410d-4880-a351-a8bf2fdda428" in namespace "security-context-test-8580" to be "success or failure" May 19 21:16:40.218: INFO: Pod "alpine-nnp-false-91435283-410d-4880-a351-a8bf2fdda428": Phase="Pending", Reason="", readiness=false. Elapsed: 3.24856ms May 19 21:16:42.222: INFO: Pod "alpine-nnp-false-91435283-410d-4880-a351-a8bf2fdda428": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007474708s May 19 21:16:44.227: INFO: Pod "alpine-nnp-false-91435283-410d-4880-a351-a8bf2fdda428": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012038417s May 19 21:16:44.227: INFO: Pod "alpine-nnp-false-91435283-410d-4880-a351-a8bf2fdda428" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:16:44.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8580" for this suite. 
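Annotation: the Security Context spec above runs a pod whose container sets allowPrivilegeEscalation to false; with that flag the kubelet starts the process with the no_new_privs bit set, so setuid binaries cannot raise its privileges. A sketch of the relevant container stanza, with the image as an illustrative stand-in for the suite's test image.

    package sketch

    import corev1 "k8s.io/api/core/v1"

    func boolPtr(b bool) *bool    { return &b }
    func int64Ptr(i int64) *int64 { return &i }

    // nnpContainer carries the security context this spec exercises:
    // a non-root user that is not allowed to escalate privileges.
    func nnpContainer() corev1.Container {
        return corev1.Container{
            Name:  "alpine-nnp-false",
            Image: "alpine", // illustrative image
            SecurityContext: &corev1.SecurityContext{
                RunAsUser:                int64Ptr(1000),
                AllowPrivilegeEscalation: boolPtr(false),
            },
        }
    }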
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":536,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:44.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:16:44.954: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:16:46.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519804, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519804, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519805, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519804, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:16:50.043: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:16:50.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9739" for this suite. STEP: Destroying namespace "webhook-9739-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.016 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":32,"skipped":554,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:50.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:16:50.440: INFO: Waiting up to 5m0s for pod "downwardapi-volume-935c6ac2-3831-401e-81ec-6dc2e61c831b" in namespace "downward-api-1995" to be "success or failure" May 19 21:16:50.446: INFO: Pod "downwardapi-volume-935c6ac2-3831-401e-81ec-6dc2e61c831b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090221ms May 19 21:16:52.450: INFO: Pod "downwardapi-volume-935c6ac2-3831-401e-81ec-6dc2e61c831b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009922033s May 19 21:16:54.454: INFO: Pod "downwardapi-volume-935c6ac2-3831-401e-81ec-6dc2e61c831b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014171494s STEP: Saw pod success May 19 21:16:54.455: INFO: Pod "downwardapi-volume-935c6ac2-3831-401e-81ec-6dc2e61c831b" satisfied condition "success or failure" May 19 21:16:54.458: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-935c6ac2-3831-401e-81ec-6dc2e61c831b container client-container: STEP: delete the pod May 19 21:16:54.524: INFO: Waiting for pod downwardapi-volume-935c6ac2-3831-401e-81ec-6dc2e61c831b to disappear May 19 21:16:54.536: INFO: Pod downwardapi-volume-935c6ac2-3831-401e-81ec-6dc2e61c831b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:16:54.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1995" for this suite. 
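DefaultMode in the test above is the permission bits stamped on the files that a downwardAPI volume projects into the container. A hedged sketch with illustrative names, showing only the field the test exercises:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-mode-demo        # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29           # illustrative
        command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          defaultMode: 0400           # the mode under test
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
    EOF
    kubectl logs downward-mode-demo   # expect: 400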
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":569,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:16:54.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 19 21:16:54.583: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. May 19 21:16:55.145: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 19 21:16:57.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519815, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519815, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519815, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519815, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:16:59.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519815, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519815, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519815, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519815, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:17:01.996: INFO: Waited 625.3739ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:17:02.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6373" for this suite. • [SLOW TEST:8.136 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":34,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:17:02.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 19 21:17:03.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5452' May 19 21:17:03.111: INFO: stderr: "" May 19 21:17:03.111: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 19 21:17:08.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5452 -o json' May 19 21:17:08.251: INFO: stderr: "" May 19 21:17:08.251: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-19T21:17:03Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5452\",\n \"resourceVersion\": \"17525278\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5452/pods/e2e-test-httpd-pod\",\n \"uid\": \"c377d6c5-3a71-4a8f-9844-474c0d668e16\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n 
\"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-r7pfv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-r7pfv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-r7pfv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T21:17:03Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T21:17:05Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T21:17:05Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T21:17:03Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b465be2ea1bb6c40dcd27d2c46995699c4cb1ed210136246771f8f73ba7d5b43\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-19T21:17:05Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.233\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.233\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-19T21:17:03Z\"\n }\n}\n" STEP: replace the image in the pod May 19 21:17:08.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5452' May 19 21:17:09.115: INFO: stderr: "" May 19 21:17:09.115: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 19 21:17:09.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5452' May 19 21:17:19.481: INFO: stderr: "" May 19 21:17:19.481: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:17:19.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5452" for this suite. 
• [SLOW TEST:16.807 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":35,"skipped":627,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:17:19.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 19 21:17:19.597: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1308 /api/v1/namespaces/watch-1308/configmaps/e2e-watch-test-label-changed fdd84a13-891c-47e2-8a13-a49f0e2f15c0 17525352 0 2020-05-19 21:17:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 21:17:19.597: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1308 /api/v1/namespaces/watch-1308/configmaps/e2e-watch-test-label-changed fdd84a13-891c-47e2-8a13-a49f0e2f15c0 17525353 0 2020-05-19 21:17:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 19 21:17:19.598: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1308 /api/v1/namespaces/watch-1308/configmaps/e2e-watch-test-label-changed fdd84a13-891c-47e2-8a13-a49f0e2f15c0 17525354 0 2020-05-19 21:17:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 19 21:17:29.638: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1308 /api/v1/namespaces/watch-1308/configmaps/e2e-watch-test-label-changed fdd84a13-891c-47e2-8a13-a49f0e2f15c0 17525396 0 2020-05-19 21:17:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 21:17:29.638: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1308 /api/v1/namespaces/watch-1308/configmaps/e2e-watch-test-label-changed fdd84a13-891c-47e2-8a13-a49f0e2f15c0 17525397 0 2020-05-19 21:17:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 19 21:17:29.638: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1308 /api/v1/namespaces/watch-1308/configmaps/e2e-watch-test-label-changed fdd84a13-891c-47e2-8a13-a49f0e2f15c0 17525398 0 2020-05-19 21:17:19 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:17:29.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1308" for this suite. • [SLOW TEST:10.162 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":36,"skipped":631,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:17:29.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:17:29.780: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 19 21:17:34.792: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 21:17:34.792: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 19 21:17:36.796: INFO: Creating deployment "test-rollover-deployment" May 19 21:17:36.807: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 19 21:17:38.814: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 19 21:17:38.820: INFO: Ensure that both replica sets have 1 created replica May 19 21:17:38.826: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 19 21:17:38.832: INFO: Updating deployment test-rollover-deployment May 19 21:17:38.832: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 19 21:17:40.858: INFO: Wait 
for revision update of deployment "test-rollover-deployment" to 2 May 19 21:17:40.863: INFO: Make sure deployment "test-rollover-deployment" is complete May 19 21:17:40.868: INFO: all replica sets need to contain the pod-template-hash label May 19 21:17:40.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519859, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:17:42.876: INFO: all replica sets need to contain the pod-template-hash label May 19 21:17:42.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519861, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:17:44.876: INFO: all replica sets need to contain the pod-template-hash label May 19 21:17:44.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519861, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:17:46.875: INFO: all replica sets need to contain the pod-template-hash label May 19 21:17:46.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519861, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:17:48.876: INFO: all replica sets need to contain the pod-template-hash label May 19 21:17:48.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519861, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:17:50.876: INFO: all replica sets need to contain the pod-template-hash label May 19 21:17:50.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519861, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725519856, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:17:52.874: INFO: May 19 21:17:52.874: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 19 21:17:52.879: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-8585 /apis/apps/v1/namespaces/deployment-8585/deployments/test-rollover-deployment 9ae867ff-aae4-4a53-8f90-091ccf71c448 17525544 2 2020-05-19 21:17:36 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025f0088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-19 21:17:36 +0000 UTC,LastTransitionTime:2020-05-19 21:17:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-19 21:17:52 +0000 UTC,LastTransitionTime:2020-05-19 21:17:36 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 19 21:17:52.882: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-8585 /apis/apps/v1/namespaces/deployment-8585/replicasets/test-rollover-deployment-574d6dfbff 0dcf4270-c243-46f8-a9b7-03bd2aa7bab9 17525533 2 2020-05-19 21:17:38 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9ae867ff-aae4-4a53-8f90-091ccf71c448 0xc0025f04e7 0xc0025f04e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025f0558 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 19 
21:17:52.882: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 19 21:17:52.882: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-8585 /apis/apps/v1/namespaces/deployment-8585/replicasets/test-rollover-controller a59c0d9a-3757-400c-9a13-8b7735daee76 17525542 2 2020-05-19 21:17:29 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9ae867ff-aae4-4a53-8f90-091ccf71c448 0xc0025f03ff 0xc0025f0410}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0025f0478 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 21:17:52.882: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-8585 /apis/apps/v1/namespaces/deployment-8585/replicasets/test-rollover-deployment-f6c94f66c 4206addf-c037-486c-a28f-fffdd45dd9a6 17525485 2 2020-05-19 21:17:36 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9ae867ff-aae4-4a53-8f90-091ccf71c448 0xc0025f05c0 0xc0025f05c1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025f0638 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 21:17:52.884: INFO: Pod "test-rollover-deployment-574d6dfbff-zm9qm" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-zm9qm test-rollover-deployment-574d6dfbff- deployment-8585 /api/v1/namespaces/deployment-8585/pods/test-rollover-deployment-574d6dfbff-zm9qm 31d517f1-7b24-4beb-bb7d-38c696eb1197 17525501 0 2020-05-19 21:17:38 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 
ReplicaSet test-rollover-deployment-574d6dfbff 0dcf4270-c243-46f8-a9b7-03bd2aa7bab9 0xc0025f0b77 0xc0025f0b78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-czmjn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-czmjn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-czmjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:17:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:17:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:17:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:17:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.188,StartTime:2020-05-19 21:17:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 21:17:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://89cb823d5438649ecbd5fc709a6915861a8afe1deeb8c11e35203d92832ecfdf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:17:52.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8585" for this suite. • [SLOW TEST:23.241 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":37,"skipped":637,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:17:52.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-1490 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1490 STEP: Deleting pre-stop pod May 19 21:18:06.244: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:18:06.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1490" for this suite. 
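The assertion here is the "prestop": 1 counter above: deleting the tester pod must cause the kubelet to run its preStop hook (an HTTP call back to the server pod) before the container is killed. A minimal sketch of a preStop hook, with hypothetical names and a placeholder callback URL standing in for the test's server pod:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-demo                   # hypothetical
    spec:
      terminationGracePeriodSeconds: 30    # the hook must finish within this budget
      containers:
      - name: main
        image: busybox:1.29                # illustrative
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "wget -q -O- http://server.example/prestop"]   # placeholder callback
    EOF
    kubectl delete pod prestop-demo        # kubelet runs the preStop hook, then sends SIGTERM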
• [SLOW TEST:13.410 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":38,"skipped":642,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:18:06.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:18:06.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5604' May 19 21:18:07.952: INFO: stderr: "" May 19 21:18:07.952: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 19 21:18:07.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5604' May 19 21:18:09.217: INFO: stderr: "" May 19 21:18:09.217: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 19 21:18:10.321: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:10.321: INFO: Found 0 / 1 May 19 21:18:11.230: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:11.230: INFO: Found 0 / 1 May 19 21:18:12.222: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:12.222: INFO: Found 1 / 1 May 19 21:18:12.222: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 21:18:12.225: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:12.225: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 19 21:18:12.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-cwnhg --namespace=kubectl-5604' May 19 21:18:12.362: INFO: stderr: "" May 19 21:18:12.362: INFO: stdout: "Name: agnhost-master-cwnhg\nNamespace: kubectl-5604\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Tue, 19 May 2020 21:18:08 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.236\nIPs:\n IP: 10.244.2.236\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://2766fa13924e95dfcba64892aa38d6e604b4cbd7046674e330aea2d2e9fe6d33\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 19 May 2020 21:18:11 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-mdwtl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-mdwtl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-mdwtl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-5604/agnhost-master-cwnhg to jerma-worker2\n Normal Pulled 3s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" May 19 21:18:12.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5604' May 19 21:18:12.496: INFO: stderr: "" May 19 21:18:12.496: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5604\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-cwnhg\n" May 19 21:18:12.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5604' May 19 21:18:12.596: INFO: stderr: "" May 19 21:18:12.596: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5604\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.98.225.211\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.236:6379\nSession Affinity: None\nEvents: \n" May 19 21:18:12.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 19 21:18:12.729: INFO: stderr: "" May 19 21:18:12.729: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Tue, 19 May 2020 21:18:06 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 19 May 2020 21:14:23 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 19 May 2020 21:14:23 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 19 May 2020 21:14:23 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 19 May 2020 21:14:23 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 65d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 65d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 65d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 19 21:18:12.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5604' May 19 21:18:12.842: INFO: stderr: "" May 19 21:18:12.842: INFO: stdout: "Name: kubectl-5604\nLabels: e2e-framework=kubectl\n e2e-run=35f88040-4317-4d4c-8836-ccfcd5f0b04b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:18:12.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5604" for this suite. • [SLOW TEST:6.548 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":39,"skipped":657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:18:12.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 19 21:18:12.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9346' May 19 21:18:14.256: INFO: stderr: "" May 19 21:18:14.256: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 19 21:18:15.259: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:15.260: INFO: Found 0 / 1 May 19 21:18:16.260: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:16.260: INFO: Found 0 / 1 May 19 21:18:17.260: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:17.260: INFO: Found 0 / 1 May 19 21:18:18.302: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:18.302: INFO: Found 1 / 1 May 19 21:18:18.302: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 19 21:18:18.306: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:18.306: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 19 21:18:18.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-ngxmk --namespace=kubectl-9346 -p {"metadata":{"annotations":{"x":"y"}}}' May 19 21:18:18.407: INFO: stderr: "" May 19 21:18:18.407: INFO: stdout: "pod/agnhost-master-ngxmk patched\n" STEP: checking annotations May 19 21:18:18.495: INFO: Selector matched 1 pods for map[app:agnhost] May 19 21:18:18.495: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
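The patch above is a strategic-merge patch (kubectl's default), so the new annotation is merged into metadata.annotations rather than replacing the map. The same step by hand, reusing the names from this run:

    kubectl patch pod agnhost-master-ngxmk --namespace=kubectl-9346 \
      -p '{"metadata":{"annotations":{"x":"y"}}}'
    kubectl get pod agnhost-master-ngxmk --namespace=kubectl-9346 \
      -o jsonpath='{.metadata.annotations.x}'        # expect: y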
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:18:18.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9346" for this suite. • [SLOW TEST:5.758 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":40,"skipped":726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:18:18.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 19 21:18:18.741: INFO: Waiting up to 5m0s for pod "client-containers-5b5eb4cf-f3e4-456e-9f8b-88d242f26c98" in namespace "containers-350" to be "success or failure" May 19 21:18:18.955: INFO: Pod "client-containers-5b5eb4cf-f3e4-456e-9f8b-88d242f26c98": Phase="Pending", Reason="", readiness=false. Elapsed: 213.872293ms May 19 21:18:20.977: INFO: Pod "client-containers-5b5eb4cf-f3e4-456e-9f8b-88d242f26c98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235866713s May 19 21:18:22.982: INFO: Pod "client-containers-5b5eb4cf-f3e4-456e-9f8b-88d242f26c98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.24060447s STEP: Saw pod success May 19 21:18:22.982: INFO: Pod "client-containers-5b5eb4cf-f3e4-456e-9f8b-88d242f26c98" satisfied condition "success or failure" May 19 21:18:22.985: INFO: Trying to get logs from node jerma-worker pod client-containers-5b5eb4cf-f3e4-456e-9f8b-88d242f26c98 container test-container: STEP: delete the pod May 19 21:18:23.020: INFO: Waiting for pod client-containers-5b5eb4cf-f3e4-456e-9f8b-88d242f26c98 to disappear May 19 21:18:23.038: INFO: Pod client-containers-5b5eb4cf-f3e4-456e-9f8b-88d242f26c98 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:18:23.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-350" for this suite. 
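The override exercised by this test is the pod-level command field: .spec.containers[].command replaces the image's default ENTRYPOINT. A minimal sketch of the same behavior from the command line, using a hypothetical pod name override-demo and a stock busybox image standing in for the test image:

  # --command makes the args after -- the container's entrypoint rather than arguments to it
  kubectl run override-demo --image=docker.io/library/busybox --restart=Never --command -- /bin/echo entrypoint-overridden
  # once the pod completes, its log should read: entrypoint-overridden
  kubectl logs override-demo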
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":754,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:18:23.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-lz47 STEP: Creating a pod to test atomic-volume-subpath May 19 21:18:23.140: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lz47" in namespace "subpath-3893" to be "success or failure" May 19 21:18:23.211: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Pending", Reason="", readiness=false. Elapsed: 71.014622ms May 19 21:18:25.216: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075828744s May 19 21:18:27.220: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 4.079795084s May 19 21:18:29.223: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 6.083501302s May 19 21:18:31.228: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 8.087819113s May 19 21:18:33.231: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 10.090894662s May 19 21:18:35.235: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 12.095252517s May 19 21:18:37.240: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 14.09956174s May 19 21:18:39.243: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 16.103505043s May 19 21:18:41.247: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 18.107505943s May 19 21:18:43.251: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 20.11124644s May 19 21:18:45.260: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Running", Reason="", readiness=true. Elapsed: 22.120087909s May 19 21:18:47.264: INFO: Pod "pod-subpath-test-projected-lz47": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.124444105s STEP: Saw pod success May 19 21:18:47.264: INFO: Pod "pod-subpath-test-projected-lz47" satisfied condition "success or failure" May 19 21:18:47.268: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-lz47 container test-container-subpath-projected-lz47: STEP: delete the pod May 19 21:18:47.395: INFO: Waiting for pod pod-subpath-test-projected-lz47 to disappear May 19 21:18:47.411: INFO: Pod pod-subpath-test-projected-lz47 no longer exists STEP: Deleting pod pod-subpath-test-projected-lz47 May 19 21:18:47.411: INFO: Deleting pod "pod-subpath-test-projected-lz47" in namespace "subpath-3893" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:18:47.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3893" for this suite. • [SLOW TEST:24.375 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":42,"skipped":759,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:18:47.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-69dbx in namespace proxy-8234 I0519 21:18:47.558319 6 runners.go:189] Created replication controller with name: proxy-service-69dbx, namespace: proxy-8234, replica count: 1 I0519 21:18:48.608720 6 runners.go:189] proxy-service-69dbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 21:18:49.608976 6 runners.go:189] proxy-service-69dbx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 21:18:50.609365 6 runners.go:189] proxy-service-69dbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 21:18:51.609563 6 runners.go:189] proxy-service-69dbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 21:18:52.609778 6 runners.go:189] proxy-service-69dbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 21:18:53.609978 6 runners.go:189] proxy-service-69dbx Pods: 1 out of 1 created, 0 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 21:18:54.610240 6 runners.go:189] proxy-service-69dbx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 21:18:55.610473 6 runners.go:189] proxy-service-69dbx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 21:18:55.614: INFO: setup took 8.152646485s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 19 21:18:55.625: INFO: (0) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 10.131709ms) May 19 21:18:55.625: INFO: (0) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 10.096828ms) May 19 21:18:55.625: INFO: (0) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 10.121279ms) May 19 21:18:55.625: INFO: (0) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 10.359158ms) May 19 21:18:55.626: INFO: (0) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 11.957402ms) May 19 21:18:55.627: INFO: (0) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... (200; 11.882847ms) May 19 21:18:55.627: INFO: (0) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 12.226144ms) May 19 21:18:55.628: INFO: (0) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 13.376304ms) May 19 21:18:55.629: INFO: (0) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 14.439538ms) May 19 21:18:55.630: INFO: (0) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 15.312299ms) May 19 21:18:55.630: INFO: (0) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 15.733793ms) May 19 21:18:55.637: INFO: (0) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 22.802376ms) May 19 21:18:55.652: INFO: (0) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 37.094396ms) May 19 21:18:55.652: INFO: (0) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: ... (200; 6.565788ms) May 19 21:18:55.659: INFO: (1) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 6.531955ms) May 19 21:18:55.659: INFO: (1) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 6.641116ms) May 19 21:18:55.659: INFO: (1) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 6.705453ms) May 19 21:18:55.659: INFO: (1) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... 
(200; 6.668452ms) May 19 21:18:55.659: INFO: (1) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 7.040535ms) May 19 21:18:55.659: INFO: (1) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 7.061386ms) May 19 21:18:55.660: INFO: (1) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 7.626391ms) May 19 21:18:55.660: INFO: (1) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 7.58862ms) May 19 21:18:55.661: INFO: (1) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 8.702551ms) May 19 21:18:55.666: INFO: (2) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 4.527138ms) May 19 21:18:55.666: INFO: (2) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 4.448276ms) May 19 21:18:55.666: INFO: (2) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... (200; 4.403863ms) May 19 21:18:55.666: INFO: (2) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 4.521938ms) May 19 21:18:55.666: INFO: (2) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 4.537268ms) May 19 21:18:55.666: INFO: (2) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 4.501001ms) May 19 21:18:55.667: INFO: (2) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: ... (200; 4.581731ms) May 19 21:18:55.672: INFO: (3) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 4.626087ms) May 19 21:18:55.672: INFO: (3) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 4.882013ms) May 19 21:18:55.672: INFO: (3) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 4.950629ms) May 19 21:18:55.672: INFO: (3) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 5.15313ms) May 19 21:18:55.672: INFO: (3) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 5.189344ms) May 19 21:18:55.672: INFO: (3) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 5.167899ms) May 19 21:18:55.672: INFO: (3) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 5.293872ms) May 19 21:18:55.672: INFO: (3) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 5.469086ms) May 19 21:18:55.672: INFO: (3) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: ... (200; 3.026656ms) May 19 21:18:55.676: INFO: (4) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 3.190204ms) May 19 21:18:55.677: INFO: (4) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 4.00468ms) May 19 21:18:55.677: INFO: (4) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 4.107044ms) May 19 21:18:55.677: INFO: (4) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... 
(200; 4.250728ms) May 19 21:18:55.677: INFO: (4) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 4.290708ms) May 19 21:18:55.677: INFO: (4) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 4.252119ms) May 19 21:18:55.677: INFO: (4) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 4.271434ms) May 19 21:18:55.677: INFO: (4) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: ... (200; 4.843945ms) May 19 21:18:55.684: INFO: (5) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: test (200; 4.914994ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 5.835054ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 5.901103ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 6.007052ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 5.97152ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 6.048329ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 6.070515ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 6.13697ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 6.067793ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 6.149177ms) May 19 21:18:55.685: INFO: (5) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 6.118846ms) May 19 21:18:55.686: INFO: (5) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 6.497278ms) May 19 21:18:55.690: INFO: (6) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 3.857974ms) May 19 21:18:55.690: INFO: (6) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... (200; 3.811823ms) May 19 21:18:55.690: INFO: (6) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 4.034188ms) May 19 21:18:55.690: INFO: (6) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: test (200; 4.920977ms) May 19 21:18:55.691: INFO: (6) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 4.996307ms) May 19 21:18:55.691: INFO: (6) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... 
(200; 5.048886ms) May 19 21:18:55.691: INFO: (6) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 5.123672ms) May 19 21:18:55.691: INFO: (6) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 5.432233ms) May 19 21:18:55.691: INFO: (6) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 5.639904ms) May 19 21:18:55.692: INFO: (6) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 6.003108ms) May 19 21:18:55.692: INFO: (6) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 6.21403ms) May 19 21:18:55.692: INFO: (6) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 6.23922ms) May 19 21:18:55.692: INFO: (6) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 6.211875ms) May 19 21:18:55.697: INFO: (7) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 5.114248ms) May 19 21:18:55.697: INFO: (7) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 5.229531ms) May 19 21:18:55.697: INFO: (7) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 5.177883ms) May 19 21:18:55.697: INFO: (7) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 5.282815ms) May 19 21:18:55.697: INFO: (7) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 5.256674ms) May 19 21:18:55.698: INFO: (7) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: test (200; 6.128199ms) May 19 21:18:55.698: INFO: (7) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 6.096347ms) May 19 21:18:55.698: INFO: (7) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 6.175266ms) May 19 21:18:55.698: INFO: (7) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 6.125643ms) May 19 21:18:55.698: INFO: (7) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... (200; 6.100383ms) May 19 21:18:55.698: INFO: (7) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 6.137582ms) May 19 21:18:55.698: INFO: (7) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 6.228932ms) May 19 21:18:55.699: INFO: (7) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 6.507895ms) May 19 21:18:55.699: INFO: (7) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 6.499791ms) May 19 21:18:55.699: INFO: (7) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 6.507726ms) May 19 21:18:55.701: INFO: (8) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 2.518962ms) May 19 21:18:55.701: INFO: (8) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 2.623805ms) May 19 21:18:55.702: INFO: (8) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 3.116527ms) May 19 21:18:55.702: INFO: (8) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... 
(200; 3.127628ms) May 19 21:18:55.702: INFO: (8) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: test<... (200; 3.435928ms) May 19 21:18:55.703: INFO: (8) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 4.082191ms) May 19 21:18:55.703: INFO: (8) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 4.348061ms) May 19 21:18:55.704: INFO: (8) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 4.799065ms) May 19 21:18:55.704: INFO: (8) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 5.304865ms) May 19 21:18:55.704: INFO: (8) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 5.549853ms) May 19 21:18:55.704: INFO: (8) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 5.532886ms) May 19 21:18:55.704: INFO: (8) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 5.745625ms) May 19 21:18:55.705: INFO: (8) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 6.529965ms) May 19 21:18:55.705: INFO: (8) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 6.613009ms) May 19 21:18:55.705: INFO: (8) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 6.689956ms) May 19 21:18:55.708: INFO: (9) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 2.746588ms) May 19 21:18:55.709: INFO: (9) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 3.733527ms) May 19 21:18:55.709: INFO: (9) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 3.793922ms) May 19 21:18:55.709: INFO: (9) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 3.867712ms) May 19 21:18:55.714: INFO: (9) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 8.07681ms) May 19 21:18:55.714: INFO: (9) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: ... 
(200; 8.676025ms) May 19 21:18:55.714: INFO: (9) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 8.80293ms) May 19 21:18:55.715: INFO: (9) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 9.100798ms) May 19 21:18:55.715: INFO: (9) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 9.11307ms) May 19 21:18:55.715: INFO: (9) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 9.04724ms) May 19 21:18:55.715: INFO: (9) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 9.431476ms) May 19 21:18:55.718: INFO: (9) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 12.444501ms) May 19 21:18:55.718: INFO: (9) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 12.534761ms) May 19 21:18:55.718: INFO: (9) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 12.696447ms) May 19 21:18:55.718: INFO: (9) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 12.655111ms) May 19 21:18:55.721: INFO: (10) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 2.739878ms) May 19 21:18:55.721: INFO: (10) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 2.852778ms) May 19 21:18:55.722: INFO: (10) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: ... (200; 4.183839ms) May 19 21:18:55.723: INFO: (10) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 4.246741ms) May 19 21:18:55.723: INFO: (10) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 4.404258ms) May 19 21:18:55.723: INFO: (10) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 4.487411ms) May 19 21:18:55.723: INFO: (10) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 4.411501ms) May 19 21:18:55.723: INFO: (10) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 4.570848ms) May 19 21:18:55.724: INFO: (10) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 5.265321ms) May 19 21:18:55.724: INFO: (10) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 5.383911ms) May 19 21:18:55.724: INFO: (10) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 5.402437ms) May 19 21:18:55.724: INFO: (10) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 5.404047ms) May 19 21:18:55.724: INFO: (10) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 5.462895ms) May 19 21:18:55.727: INFO: (11) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 2.893134ms) May 19 21:18:55.727: INFO: (11) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 2.979542ms) May 19 21:18:55.727: INFO: (11) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... 
(200; 3.033323ms) May 19 21:18:55.727: INFO: (11) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 3.070414ms) May 19 21:18:55.728: INFO: (11) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 3.630449ms) May 19 21:18:55.728: INFO: (11) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: ... (200; 3.77078ms) May 19 21:18:55.728: INFO: (11) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 3.846262ms) May 19 21:18:55.728: INFO: (11) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 3.779925ms) May 19 21:18:55.728: INFO: (11) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 3.849274ms) May 19 21:18:55.728: INFO: (11) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 4.324298ms) May 19 21:18:55.728: INFO: (11) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 4.467373ms) May 19 21:18:55.728: INFO: (11) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 4.436099ms) May 19 21:18:55.728: INFO: (11) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 4.401985ms) May 19 21:18:55.729: INFO: (11) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 4.995855ms) May 19 21:18:55.737: INFO: (12) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... (200; 8.276786ms) May 19 21:18:55.737: INFO: (12) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 8.122059ms) May 19 21:18:55.737: INFO: (12) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 8.185185ms) May 19 21:18:55.737: INFO: (12) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 8.398621ms) May 19 21:18:55.737: INFO: (12) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 8.386915ms) May 19 21:18:55.738: INFO: (12) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 8.405072ms) May 19 21:18:55.738: INFO: (12) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 8.499315ms) May 19 21:18:55.738: INFO: (12) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 8.449706ms) May 19 21:18:55.738: INFO: (12) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: ... (200; 4.225574ms) May 19 21:18:55.742: INFO: (13) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 4.08791ms) May 19 21:18:55.742: INFO: (13) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 4.168221ms) May 19 21:18:55.742: INFO: (13) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 3.947118ms) May 19 21:18:55.742: INFO: (13) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 4.169872ms) May 19 21:18:55.742: INFO: (13) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... 
(200; 4.210767ms) May 19 21:18:55.742: INFO: (13) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 4.338687ms) May 19 21:18:55.744: INFO: (13) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 5.628228ms) May 19 21:18:55.744: INFO: (13) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 5.754828ms) May 19 21:18:55.744: INFO: (13) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 5.697451ms) May 19 21:18:55.744: INFO: (13) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 5.837047ms) May 19 21:18:55.744: INFO: (13) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 5.703515ms) May 19 21:18:55.744: INFO: (13) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 6.116927ms) May 19 21:18:55.751: INFO: (14) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 6.832196ms) May 19 21:18:55.751: INFO: (14) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 6.862027ms) May 19 21:18:55.751: INFO: (14) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 6.838051ms) May 19 21:18:55.751: INFO: (14) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 6.908818ms) May 19 21:18:55.751: INFO: (14) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 6.947675ms) May 19 21:18:55.751: INFO: (14) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 6.9538ms) May 19 21:18:55.751: INFO: (14) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 7.033192ms) May 19 21:18:55.751: INFO: (14) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 7.053591ms) May 19 21:18:55.752: INFO: (14) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 7.37556ms) May 19 21:18:55.752: INFO: (14) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... (200; 8.065133ms) May 19 21:18:55.753: INFO: (14) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 8.627768ms) May 19 21:18:55.753: INFO: (14) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 8.671128ms) May 19 21:18:55.754: INFO: (14) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 9.788155ms) May 19 21:18:55.754: INFO: (14) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 9.794873ms) May 19 21:18:55.754: INFO: (14) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 9.74361ms) May 19 21:18:55.754: INFO: (14) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: test<... (200; 7.005724ms) May 19 21:18:55.762: INFO: (15) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... 
(200; 7.742343ms) May 19 21:18:55.763: INFO: (15) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 7.488102ms) May 19 21:18:55.763: INFO: (15) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: test (200; 7.512537ms) May 19 21:18:55.763: INFO: (15) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 7.948502ms) May 19 21:18:55.763: INFO: (15) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 8.964228ms) May 19 21:18:55.763: INFO: (15) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 8.315702ms) May 19 21:18:55.764: INFO: (15) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 8.387139ms) May 19 21:18:55.764: INFO: (15) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 8.342161ms) May 19 21:18:55.764: INFO: (15) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 9.419069ms) May 19 21:18:55.764: INFO: (15) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 9.340059ms) May 19 21:18:55.764: INFO: (15) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 9.878536ms) May 19 21:18:55.767: INFO: (16) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 2.973164ms) May 19 21:18:55.767: INFO: (16) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 3.021099ms) May 19 21:18:55.767: INFO: (16) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 3.060225ms) May 19 21:18:55.767: INFO: (16) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 3.047219ms) May 19 21:18:55.768: INFO: (16) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 3.585715ms) May 19 21:18:55.768: INFO: (16) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 3.707685ms) May 19 21:18:55.768: INFO: (16) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... (200; 3.772387ms) May 19 21:18:55.769: INFO: (16) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: test (200; 2.346034ms) May 19 21:18:55.773: INFO: (17) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... (200; 2.448411ms) May 19 21:18:55.773: INFO: (17) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 2.507899ms) May 19 21:18:55.773: INFO: (17) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 2.514477ms) May 19 21:18:55.774: INFO: (17) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 3.301232ms) May 19 21:18:55.774: INFO: (17) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 3.337922ms) May 19 21:18:55.774: INFO: (17) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 3.354673ms) May 19 21:18:55.774: INFO: (17) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 3.479712ms) May 19 21:18:55.774: INFO: (17) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... 
(200; 3.488509ms) May 19 21:18:55.774: INFO: (17) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 3.536212ms) May 19 21:18:55.774: INFO: (17) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 3.619555ms) May 19 21:18:55.775: INFO: (17) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 4.091597ms) May 19 21:18:55.775: INFO: (17) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 4.007974ms) May 19 21:18:55.775: INFO: (17) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: ... (200; 2.190413ms) May 19 21:18:55.777: INFO: (18) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 2.323931ms) May 19 21:18:55.779: INFO: (18) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 3.574463ms) May 19 21:18:55.779: INFO: (18) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: test<... (200; 3.748604ms) May 19 21:18:55.779: INFO: (18) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 3.686161ms) May 19 21:18:55.779: INFO: (18) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname2/proxy/: bar (200; 3.939387ms) May 19 21:18:55.779: INFO: (18) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 4.160983ms) May 19 21:18:55.779: INFO: (18) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname2/proxy/: tls qux (200; 4.352419ms) May 19 21:18:55.779: INFO: (18) /api/v1/namespaces/proxy-8234/services/https:proxy-service-69dbx:tlsportname1/proxy/: tls baz (200; 4.426861ms) May 19 21:18:55.779: INFO: (18) /api/v1/namespaces/proxy-8234/services/proxy-service-69dbx:portname1/proxy/: foo (200; 4.419685ms) May 19 21:18:55.779: INFO: (18) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 4.406213ms) May 19 21:18:55.783: INFO: (19) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:460/proxy/: tls baz (200; 3.997817ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:1080/proxy/: ... (200; 4.172039ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:1080/proxy/: test<... 
(200; 4.24704ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:160/proxy/: foo (200; 4.216134ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk/proxy/: test (200; 4.172343ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/pods/http:proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 4.196498ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:462/proxy/: tls qux (200; 4.221911ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/pods/proxy-service-69dbx-mk5dk:162/proxy/: bar (200; 4.547424ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname1/proxy/: foo (200; 4.63352ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/services/http:proxy-service-69dbx:portname2/proxy/: bar (200; 4.694231ms) May 19 21:18:55.784: INFO: (19) /api/v1/namespaces/proxy-8234/pods/https:proxy-service-69dbx-mk5dk:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:19:09.363: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 19 21:19:09.376: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:09.381: INFO: Number of nodes with available pods: 0 May 19 21:19:09.381: INFO: Node jerma-worker is running more than one daemon pod May 19 21:19:10.386: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:10.389: INFO: Number of nodes with available pods: 0 May 19 21:19:10.390: INFO: Node jerma-worker is running more than one daemon pod May 19 21:19:11.412: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:11.416: INFO: Number of nodes with available pods: 0 May 19 21:19:11.416: INFO: Node jerma-worker is running more than one daemon pod May 19 21:19:12.387: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:12.391: INFO: Number of nodes with available pods: 0 May 19 21:19:12.391: INFO: Node jerma-worker is running more than one daemon pod May 19 21:19:13.386: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:13.390: INFO: Number of nodes with available pods: 2 May 19 21:19:13.390: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 19 21:19:13.434: INFO: Wrong image for pod: daemon-set-hzb7g. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:13.434: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:13.460: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:14.470: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:14.470: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:14.473: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:15.466: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:15.466: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:15.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:16.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:16.465: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:16.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:17.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:17.465: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:17.465: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:17.469: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:18.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:18.465: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:18.465: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:18.468: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:19.466: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:19.466: INFO: Wrong image for pod: daemon-set-pmlq5. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:19.466: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:19.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:20.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:20.465: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:20.465: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:20.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:21.466: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:21.466: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:21.466: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:21.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:22.466: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:22.466: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:22.466: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:22.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:23.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:23.465: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:23.465: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:23.468: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:24.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:24.465: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:24.465: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:24.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:25.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 19 21:19:25.466: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:25.466: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:25.469: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:26.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:26.465: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:26.465: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:26.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:27.464: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:27.464: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:27.464: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:27.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:28.466: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:28.466: INFO: Wrong image for pod: daemon-set-pmlq5. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:28.466: INFO: Pod daemon-set-pmlq5 is not available May 19 21:19:28.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:29.466: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:29.466: INFO: Pod daemon-set-pmdl7 is not available May 19 21:19:29.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:30.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:30.465: INFO: Pod daemon-set-pmdl7 is not available May 19 21:19:30.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:31.723: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:31.723: INFO: Pod daemon-set-pmdl7 is not available May 19 21:19:31.737: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:32.464: INFO: Wrong image for pod: daemon-set-hzb7g. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:32.468: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:33.464: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:33.467: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:34.466: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:34.466: INFO: Pod daemon-set-hzb7g is not available May 19 21:19:34.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:35.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:35.465: INFO: Pod daemon-set-hzb7g is not available May 19 21:19:35.469: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:36.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:36.465: INFO: Pod daemon-set-hzb7g is not available May 19 21:19:36.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:37.483: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:37.483: INFO: Pod daemon-set-hzb7g is not available May 19 21:19:37.487: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:38.465: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:38.465: INFO: Pod daemon-set-hzb7g is not available May 19 21:19:38.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:39.464: INFO: Wrong image for pod: daemon-set-hzb7g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 19 21:19:39.464: INFO: Pod daemon-set-hzb7g is not available May 19 21:19:39.468: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:40.465: INFO: Pod daemon-set-dg6xw is not available May 19 21:19:40.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 19 21:19:40.474: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:40.477: INFO: Number of nodes with available pods: 1 May 19 21:19:40.477: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:19:41.483: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:41.487: INFO: Number of nodes with available pods: 1 May 19 21:19:41.487: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:19:42.483: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:42.486: INFO: Number of nodes with available pods: 1 May 19 21:19:42.486: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:19:43.484: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:43.487: INFO: Number of nodes with available pods: 2 May 19 21:19:43.487: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2892, will wait for the garbage collector to delete the pods May 19 21:19:43.560: INFO: Deleting DaemonSet.extensions daemon-set took: 6.916331ms May 19 21:19:43.860: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.256296ms May 19 21:19:49.564: INFO: Number of nodes with available pods: 0 May 19 21:19:49.564: INFO: Number of running nodes: 0, number of available pods: 0 May 19 21:19:49.567: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2892/daemonsets","resourceVersion":"17526215"},"items":null} May 19 21:19:49.570: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2892/pods","resourceVersion":"17526215"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:19:49.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2892" for this suite. 
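The rolling update verified above can be reproduced by hand with kubectl; a minimal sketch, assuming the DaemonSet's container is named app (the container name is not visible in this log):

kubectl -n daemonsets-2892 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # container name 'app' is an assumption
kubectl -n daemonsets-2892 rollout status daemonset/daemon-set  # blocks until every node runs the updated spec
kubectl -n daemonsets-2892 get pods -o wide                     # all daemon pods should now report the agnhost image

With updateStrategy RollingUpdate the controller deletes one old pod per node and waits for its replacement to become available, which is exactly the "Wrong image for pod ... is not available" polling recorded above.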
• [SLOW TEST:40.334 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":44,"skipped":776,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:19:49.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:19:49.671: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c52dd829-4e45-4842-8110-2c2310b579d3" in namespace "projected-6555" to be "success or failure" May 19 21:19:49.725: INFO: Pod "downwardapi-volume-c52dd829-4e45-4842-8110-2c2310b579d3": Phase="Pending", Reason="", readiness=false. Elapsed: 53.140595ms May 19 21:19:51.729: INFO: Pod "downwardapi-volume-c52dd829-4e45-4842-8110-2c2310b579d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057188881s May 19 21:19:53.732: INFO: Pod "downwardapi-volume-c52dd829-4e45-4842-8110-2c2310b579d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060970619s STEP: Saw pod success May 19 21:19:53.732: INFO: Pod "downwardapi-volume-c52dd829-4e45-4842-8110-2c2310b579d3" satisfied condition "success or failure" May 19 21:19:53.735: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c52dd829-4e45-4842-8110-2c2310b579d3 container client-container: STEP: delete the pod May 19 21:19:53.771: INFO: Waiting for pod downwardapi-volume-c52dd829-4e45-4842-8110-2c2310b579d3 to disappear May 19 21:19:53.776: INFO: Pod downwardapi-volume-c52dd829-4e45-4842-8110-2c2310b579d3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:19:53.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6555" for this suite. 
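A projected downwardAPI volume with an explicit defaultMode, as exercised above, can be written by hand; a sketch with an illustrative busybox image standing in for the suite's test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31              # assumption: any image with a shell
    command: ["sh", "-c", "ls -lL /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0644              # the file mode the test asserts
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

The pod log should list the projected file as -rw-r--r-- (0644) and print the pod's own name.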
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":781,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:19:53.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 19 21:19:53.896: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:53.901: INFO: Number of nodes with available pods: 0 May 19 21:19:53.901: INFO: Node jerma-worker is running more than one daemon pod May 19 21:19:54.906: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:54.910: INFO: Number of nodes with available pods: 0 May 19 21:19:54.910: INFO: Node jerma-worker is running more than one daemon pod May 19 21:19:55.907: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:55.910: INFO: Number of nodes with available pods: 0 May 19 21:19:55.910: INFO: Node jerma-worker is running more than one daemon pod May 19 21:19:56.911: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:56.922: INFO: Number of nodes with available pods: 0 May 19 21:19:56.922: INFO: Node jerma-worker is running more than one daemon pod May 19 21:19:57.907: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:57.911: INFO: Number of nodes with available pods: 1 May 19 21:19:57.911: INFO: Node jerma-worker is running more than one daemon pod May 19 21:19:58.906: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:58.909: INFO: Number of nodes with available pods: 2 May 19 21:19:58.909: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 19 21:19:58.965: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:19:59.035: INFO: Number of nodes with available pods: 1 May 19 21:19:59.035: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:20:00.039: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:20:00.043: INFO: Number of nodes with available pods: 1 May 19 21:20:00.043: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:20:01.072: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:20:01.084: INFO: Number of nodes with available pods: 1 May 19 21:20:01.084: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:20:02.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:20:02.045: INFO: Number of nodes with available pods: 1 May 19 21:20:02.045: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:20:03.040: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:20:03.044: INFO: Number of nodes with available pods: 2 May 19 21:20:03.044: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2441, will wait for the garbage collector to delete the pods May 19 21:20:03.146: INFO: Deleting DaemonSet.extensions daemon-set took: 43.748471ms May 19 21:20:03.246: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.269347ms May 19 21:20:09.550: INFO: Number of nodes with available pods: 0 May 19 21:20:09.550: INFO: Number of running nodes: 0, number of available pods: 0 May 19 21:20:09.553: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2441/daemonsets","resourceVersion":"17526388"},"items":null} May 19 21:20:09.556: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2441/pods","resourceVersion":"17526388"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:20:09.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2441" for this suite. 
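The revival checked above is the DaemonSet controller's self-healing: a daemon pod that ends up Failed (the suite forces the phase through the status API) is removed and recreated. A rough by-hand equivalent is deleting one daemon pod and watching its replacement appear; the pod name below is a placeholder:

kubectl -n daemonsets-2441 get pods -o wide              # one daemon pod per schedulable node
kubectl -n daemonsets-2441 delete pod <daemon-set-pod>   # placeholder: any pod owned by the DaemonSet
kubectl -n daemonsets-2441 get pods -w                   # the controller recreates a pod on the same node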
• [SLOW TEST:15.789 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":46,"skipped":798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:20:09.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8233 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8233 STEP: creating replication controller externalsvc in namespace services-8233 I0519 21:20:09.789699 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8233, replica count: 2 I0519 21:20:12.840060 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 21:20:15.840245 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 19 21:20:15.926: INFO: Creating new exec pod May 19 21:20:19.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8233 execpodpr47r -- /bin/sh -x -c nslookup clusterip-service' May 19 21:20:20.280: INFO: stderr: "I0519 21:20:20.089617 1020 log.go:172] (0xc000566dc0) (0xc000739b80) Create stream\nI0519 21:20:20.089695 1020 log.go:172] (0xc000566dc0) (0xc000739b80) Stream added, broadcasting: 1\nI0519 21:20:20.092680 1020 log.go:172] (0xc000566dc0) Reply frame received for 1\nI0519 21:20:20.092726 1020 log.go:172] (0xc000566dc0) (0xc000b56000) Create stream\nI0519 21:20:20.092740 1020 log.go:172] (0xc000566dc0) (0xc000b56000) Stream added, broadcasting: 3\nI0519 21:20:20.093981 1020 log.go:172] (0xc000566dc0) Reply frame received for 3\nI0519 21:20:20.094015 1020 log.go:172] (0xc000566dc0) (0xc000739d60) Create stream\nI0519 21:20:20.094027 1020 log.go:172] (0xc000566dc0) (0xc000739d60) Stream added, broadcasting: 5\nI0519 21:20:20.094953 1020 log.go:172] (0xc000566dc0) Reply frame received for 5\nI0519 21:20:20.177057 1020 log.go:172] (0xc000566dc0) Data frame received for 5\nI0519 21:20:20.177086 1020 log.go:172] 
(0xc000739d60) (5) Data frame handling\nI0519 21:20:20.177102 1020 log.go:172] (0xc000739d60) (5) Data frame sent\n+ nslookup clusterip-service\nI0519 21:20:20.272657 1020 log.go:172] (0xc000566dc0) Data frame received for 3\nI0519 21:20:20.272685 1020 log.go:172] (0xc000b56000) (3) Data frame handling\nI0519 21:20:20.272702 1020 log.go:172] (0xc000b56000) (3) Data frame sent\nI0519 21:20:20.274073 1020 log.go:172] (0xc000566dc0) Data frame received for 3\nI0519 21:20:20.274089 1020 log.go:172] (0xc000b56000) (3) Data frame handling\nI0519 21:20:20.274102 1020 log.go:172] (0xc000b56000) (3) Data frame sent\nI0519 21:20:20.274741 1020 log.go:172] (0xc000566dc0) Data frame received for 5\nI0519 21:20:20.274777 1020 log.go:172] (0xc000739d60) (5) Data frame handling\nI0519 21:20:20.274806 1020 log.go:172] (0xc000566dc0) Data frame received for 3\nI0519 21:20:20.274817 1020 log.go:172] (0xc000b56000) (3) Data frame handling\nI0519 21:20:20.276780 1020 log.go:172] (0xc000566dc0) Data frame received for 1\nI0519 21:20:20.276795 1020 log.go:172] (0xc000739b80) (1) Data frame handling\nI0519 21:20:20.276803 1020 log.go:172] (0xc000739b80) (1) Data frame sent\nI0519 21:20:20.276812 1020 log.go:172] (0xc000566dc0) (0xc000739b80) Stream removed, broadcasting: 1\nI0519 21:20:20.276824 1020 log.go:172] (0xc000566dc0) Go away received\nI0519 21:20:20.277330 1020 log.go:172] (0xc000566dc0) (0xc000739b80) Stream removed, broadcasting: 1\nI0519 21:20:20.277344 1020 log.go:172] (0xc000566dc0) (0xc000b56000) Stream removed, broadcasting: 3\nI0519 21:20:20.277350 1020 log.go:172] (0xc000566dc0) (0xc000739d60) Stream removed, broadcasting: 5\n" May 19 21:20:20.280: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8233.svc.cluster.local\tcanonical name = externalsvc.services-8233.svc.cluster.local.\nName:\texternalsvc.services-8233.svc.cluster.local\nAddress: 10.107.205.100\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8233, will wait for the garbage collector to delete the pods May 19 21:20:20.342: INFO: Deleting ReplicationController externalsvc took: 8.185847ms May 19 21:20:20.642: INFO: Terminating ReplicationController externalsvc pods took: 300.267103ms May 19 21:20:29.564: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:20:29.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8233" for this suite. 
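The type flip above rewrites the Service spec so the cluster DNS answers with a CNAME instead of a cluster IP. A sketch using the names from this namespace; on a live object the apiserver may also require clearing spec.clusterIP when the type changes, so treat this as the target state rather than an exact patch:

kubectl -n services-8233 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
spec:
  type: ExternalName
  externalName: externalsvc.services-8233.svc.cluster.local
EOF
kubectl -n services-8233 exec execpodpr47r -- nslookup clusterip-service   # expect the CNAME shown in the stdout above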
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:20.059 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":47,"skipped":881,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:20:29.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 19 21:20:33.757: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 19 21:20:53.852: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:20:53.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4284" for this suite.
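Graceful deletion, as verified above, is a two-party protocol: the apiserver stamps deletionTimestamp and deletionGracePeriodSeconds on the pod, the kubelet sends SIGTERM and waits up to the grace period before SIGKILL, and only then is the object removed from the API. A sketch with an arbitrary grace period and a hypothetical pod name:

kubectl delete pod mypod --grace-period=30 --wait=false   # 'mypod' is a placeholder
kubectl get pod mypod -o jsonpath='{.metadata.deletionTimestamp} {.metadata.deletionGracePeriodSeconds}{"\n"}'   # both fields are set while the pod terminates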
• [SLOW TEST:24.229 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":48,"skipped":892,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:20:53.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 19 21:20:53.954: INFO: Waiting up to 5m0s for pod "pod-594dbc06-eaa6-4269-801b-9136c29d0d84" in namespace "emptydir-121" to be "success or failure" May 19 21:20:53.972: INFO: Pod "pod-594dbc06-eaa6-4269-801b-9136c29d0d84": Phase="Pending", Reason="", readiness=false. Elapsed: 18.210406ms May 19 21:20:56.049: INFO: Pod "pod-594dbc06-eaa6-4269-801b-9136c29d0d84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095354597s May 19 21:20:58.054: INFO: Pod "pod-594dbc06-eaa6-4269-801b-9136c29d0d84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099725881s STEP: Saw pod success May 19 21:20:58.054: INFO: Pod "pod-594dbc06-eaa6-4269-801b-9136c29d0d84" satisfied condition "success or failure" May 19 21:20:58.057: INFO: Trying to get logs from node jerma-worker pod pod-594dbc06-eaa6-4269-801b-9136c29d0d84 container test-container: STEP: delete the pod May 19 21:20:58.116: INFO: Waiting for pod pod-594dbc06-eaa6-4269-801b-9136c29d0d84 to disappear May 19 21:20:58.157: INFO: Pod pod-594dbc06-eaa6-4269-801b-9136c29d0d84 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:20:58.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-121" for this suite. 
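The (root,0644,default) case above means: write the file as root, expect mode 0644, and use the default emptyDir medium (node disk rather than tmpfs). A hand-rolled equivalent of the pod the test creates, with an illustrative busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31        # assumption: stands in for the suite's mount-test image
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # no medium set = default (node filesystem); medium: Memory would use tmpfs
EOF
kubectl logs emptydir-0644     # expect -rw-r--r-- ... /test-volume/f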
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":907,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:20:58.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:20:58.232: INFO: Creating deployment "test-recreate-deployment" May 19 21:20:58.246: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 19 21:20:58.335: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 19 21:21:00.342: INFO: Waiting deployment "test-recreate-deployment" to complete May 19 21:21:00.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520058, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520058, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520058, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520058, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:21:02.350: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 19 21:21:02.355: INFO: Updating deployment test-recreate-deployment May 19 21:21:02.355: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 19 21:21:02.922: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-253 /apis/apps/v1/namespaces/deployment-253/deployments/test-recreate-deployment 78cb496a-2e4f-4d30-8354-8e9a772d037c 17526739 2 2020-05-19 21:20:58 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil 
nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006c8db58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-19 21:21:02 +0000 UTC,LastTransitionTime:2020-05-19 21:21:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-19 21:21:02 +0000 UTC,LastTransitionTime:2020-05-19 21:20:58 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 19 21:21:02.931: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-253 /apis/apps/v1/namespaces/deployment-253/replicasets/test-recreate-deployment-5f94c574ff 7b498099-3d3c-460f-ab4c-cad3fd2318dd 17526737 1 2020-05-19 21:21:02 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 78cb496a-2e4f-4d30-8354-8e9a772d037c 0xc006c8dee7 0xc006c8dee8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006c8df48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 21:21:02.931: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 19 21:21:02.931: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-253 /apis/apps/v1/namespaces/deployment-253/replicasets/test-recreate-deployment-799c574856 8b8f5634-e0f9-4da3-b085-702ef848e94a 17526726 2 2020-05-19 21:20:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 78cb496a-2e4f-4d30-8354-8e9a772d037c 0xc006c8dfb7 0xc006c8dfb8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000d5e028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 21:21:02.934: INFO: Pod "test-recreate-deployment-5f94c574ff-v7s24" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-v7s24 test-recreate-deployment-5f94c574ff- deployment-253 /api/v1/namespaces/deployment-253/pods/test-recreate-deployment-5f94c574ff-v7s24 f48b38f2-a0cf-42a3-92e3-79c565d502e9 17526740 0 2020-05-19 21:21:02 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 7b498099-3d3c-460f-ab4c-cad3fd2318dd 0xc000d5e4a7 0xc000d5e4a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v9blj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v9blj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v9blj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:21:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:21:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:21:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:21:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-19 21:21:02 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:21:02.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-253" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":50,"skipped":929,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:21:02.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-c6ffb6be-dd58-4e1c-9f9c-0aa5fed4d708 STEP: Creating a pod to test consume configMaps May 19 21:21:03.047: INFO: Waiting up to 5m0s for pod "pod-configmaps-13085d85-a964-4eea-b34b-49794ebfc3ac" in namespace "configmap-827" to be "success or failure" May 19 21:21:03.080: INFO: Pod "pod-configmaps-13085d85-a964-4eea-b34b-49794ebfc3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 33.506648ms May 19 21:21:05.149: INFO: Pod "pod-configmaps-13085d85-a964-4eea-b34b-49794ebfc3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102073382s May 19 21:21:07.153: INFO: Pod "pod-configmaps-13085d85-a964-4eea-b34b-49794ebfc3ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10597846s STEP: Saw pod success May 19 21:21:07.153: INFO: Pod "pod-configmaps-13085d85-a964-4eea-b34b-49794ebfc3ac" satisfied condition "success or failure" May 19 21:21:07.156: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-13085d85-a964-4eea-b34b-49794ebfc3ac container configmap-volume-test: STEP: delete the pod May 19 21:21:07.178: INFO: Waiting for pod pod-configmaps-13085d85-a964-4eea-b34b-49794ebfc3ac to disappear May 19 21:21:07.182: INFO: Pod pod-configmaps-13085d85-a964-4eea-b34b-49794ebfc3ac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:21:07.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-827" for this suite. 
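Two of the behaviours above in manifest form. First, the ConfigMap "mappings and Item mode" projection: a single key is remapped to a chosen path with a per-item file mode (key, path, and mode here are illustrative; the log only shows the generated names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.31             # assumption: any image with a shell
    command: ["sh", "-c", "ls -lLR /etc/cm && cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/cm
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-2        # the mapping: key data-1 appears at this path
        mode: 0400                  # per-item mode, overrides the volume defaultMode
EOF

Second, the RecreateDeployment test earlier in this block: a strategy of type Recreate scales the old ReplicaSet to zero before the new one comes up, which is why the dump above shows test-recreate-deployment-799c574856 at Replicas:*0 while the test-recreate-deployment-5f94c574ff pod is still Pending. The manifest shape, using the names and images from the log:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate                  # delete all old pods first, then create new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine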
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:21:07.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 19 21:21:15.400: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 21:21:15.445: INFO: Pod pod-with-poststart-http-hook still exists May 19 21:21:17.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 21:21:17.448: INFO: Pod pod-with-poststart-http-hook still exists May 19 21:21:19.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 21:21:19.450: INFO: Pod pod-with-poststart-http-hook still exists May 19 21:21:21.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 19 21:21:21.448: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:21:21.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2810" for this suite. 
• [SLOW TEST:14.267 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":955,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:21:21.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 19 21:21:29.596: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 21:21:29.664: INFO: Pod pod-with-prestop-http-hook still exists May 19 21:21:31.664: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 21:21:31.808: INFO: Pod pod-with-prestop-http-hook still exists May 19 21:21:33.664: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 21:21:33.670: INFO: Pod pod-with-prestop-http-hook still exists May 19 21:21:35.664: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 21:21:35.682: INFO: Pod pod-with-prestop-http-hook still exists May 19 21:21:37.664: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 21:21:37.668: INFO: Pod pod-with-prestop-http-hook still exists May 19 21:21:39.664: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 21:21:39.667: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:21:39.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4708" for this suite. 
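The preStop variant mirrors it: the kubelet fires the HTTP GET when deletion begins, before SIGTERM, and the hook's runtime counts against the pod's termination grace period. The declaration differs only in the hook slot:

    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          host: <handler-pod-ip>    # placeholder, as in the postStart sketch above
          port: 8080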
• [SLOW TEST:18.260 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":962,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:21:39.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 19 21:21:39.764: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 19 21:21:39.782: INFO: Waiting for terminating namespaces to be deleted... May 19 21:21:39.785: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 19 21:21:39.790: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 21:21:39.791: INFO: Container kindnet-cni ready: true, restart count 0 May 19 21:21:39.791: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 21:21:39.791: INFO: Container kube-proxy ready: true, restart count 0 May 19 21:21:39.791: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 19 21:21:39.796: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 19 21:21:39.796: INFO: Container kube-hunter ready: false, restart count 0 May 19 21:21:39.796: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 21:21:39.796: INFO: Container kindnet-cni ready: true, restart count 0 May 19 21:21:39.796: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 19 21:21:39.796: INFO: Container kube-bench ready: false, restart count 0 May 19 21:21:39.797: INFO: pod-handle-http-request from container-lifecycle-hook-4708 started at 2020-05-19 21:21:21 +0000 UTC (1 container status recorded) May 19 21:21:39.797: INFO: Container pod-handle-http-request ready: true, restart count 0 May 19 21:21:39.797: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 21:21:39.797: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c1d82794-8605-4830-9a01-14aea6843ce0 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol, on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-c1d82794-8605-4830-9a01-14aea6843ce0 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-c1d82794-8605-4830-9a01-14aea6843ce0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:21:56.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-10" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.409 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":54,"skipped":962,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:21:56.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5109.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5109.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search
_http._tcp.dns-test-service.dns-5109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5109.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 255.4.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.4.255_udp@PTR;check="$$(dig +tcp +noall +answer +search 255.4.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.4.255_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5109.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5109.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5109.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5109.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5109.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5109.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5109.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 255.4.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.4.255_udp@PTR;check="$$(dig +tcp +noall +answer +search 255.4.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.4.255_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 21:22:02.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:02.452: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:02.455: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:02.459: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:02.632: INFO: Unable to read jessie_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:02.634: INFO: Unable to read jessie_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:02.636: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:02.639: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:02.655: INFO: Lookups using dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988 failed for: [wheezy_udp@dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_udp@dns-test-service.dns-5109.svc.cluster.local jessie_tcp@dns-test-service.dns-5109.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local] May 19 21:22:07.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:07.681: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods 
dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:07.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:07.688: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:07.710: INFO: Unable to read jessie_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:07.712: INFO: Unable to read jessie_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:07.715: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:07.718: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:07.736: INFO: Lookups using dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988 failed for: [wheezy_udp@dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_udp@dns-test-service.dns-5109.svc.cluster.local jessie_tcp@dns-test-service.dns-5109.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local] May 19 21:22:12.659: INFO: Unable to read wheezy_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:12.663: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:12.666: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:12.669: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:12.691: INFO: Unable to read jessie_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the 
server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:12.694: INFO: Unable to read jessie_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:12.696: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:12.698: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:12.715: INFO: Lookups using dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988 failed for: [wheezy_udp@dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_udp@dns-test-service.dns-5109.svc.cluster.local jessie_tcp@dns-test-service.dns-5109.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local] May 19 21:22:17.660: INFO: Unable to read wheezy_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:17.664: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:17.667: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:17.670: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:17.692: INFO: Unable to read jessie_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:17.695: INFO: Unable to read jessie_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:17.698: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:17.701: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod 
dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:17.717: INFO: Lookups using dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988 failed for: [wheezy_udp@dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_udp@dns-test-service.dns-5109.svc.cluster.local jessie_tcp@dns-test-service.dns-5109.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local] May 19 21:22:22.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:22.680: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:22.683: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:22.686: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:22.706: INFO: Unable to read jessie_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:22.709: INFO: Unable to read jessie_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:22.711: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:22.714: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:22.731: INFO: Lookups using dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988 failed for: [wheezy_udp@dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_udp@dns-test-service.dns-5109.svc.cluster.local jessie_tcp@dns-test-service.dns-5109.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local] May 19 
21:22:27.660: INFO: Unable to read wheezy_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:27.663: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:27.667: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:27.670: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:27.688: INFO: Unable to read jessie_udp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:27.690: INFO: Unable to read jessie_tcp@dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:27.693: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:27.696: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local from pod dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988: the server could not find the requested resource (get pods dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988) May 19 21:22:27.711: INFO: Lookups using dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988 failed for: [wheezy_udp@dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@dns-test-service.dns-5109.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_udp@dns-test-service.dns-5109.svc.cluster.local jessie_tcp@dns-test-service.dns-5109.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5109.svc.cluster.local] May 19 21:22:32.731: INFO: DNS probes using dns-5109/dns-test-73b21f3e-fb1a-42a6-bcbb-7afc91740988 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:22:33.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5109" for this suite. 
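The probe loop the wheezy and jessie containers run above can be reproduced by hand from any pod that has dig installed; a minimal sketch, using the Service and namespace names recorded in this run (dns-test-service in dns-5109, ClusterIP 10.111.4.255):

    # A record for the service, first over UDP, then forcing TCP
    dig +notcp +noall +answer +search dns-test-service.dns-5109.svc.cluster.local A
    dig +tcp +noall +answer +search dns-test-service.dns-5109.svc.cluster.local A
    # SRV record advertised for the named port "http" on the same service
    dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5109.svc.cluster.local SRV
    # reverse (PTR) lookup of the service's ClusterIP
    dig +tcp +noall +answer +search 255.4.111.10.in-addr.arpa. PTR

A non-empty answer section is all the test requires: each probe is guarded by test -n "$check" before the OK marker file is written to /results.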
• [SLOW TEST:37.526 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":55,"skipped":989,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:22:33.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:22:33.708: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05c7daad-47ed-4237-9e9a-64d24e9102f1" in namespace "downward-api-5859" to be "success or failure" May 19 21:22:33.712: INFO: Pod "downwardapi-volume-05c7daad-47ed-4237-9e9a-64d24e9102f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.312622ms May 19 21:22:35.716: INFO: Pod "downwardapi-volume-05c7daad-47ed-4237-9e9a-64d24e9102f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007483728s May 19 21:22:37.720: INFO: Pod "downwardapi-volume-05c7daad-47ed-4237-9e9a-64d24e9102f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011270158s STEP: Saw pod success May 19 21:22:37.720: INFO: Pod "downwardapi-volume-05c7daad-47ed-4237-9e9a-64d24e9102f1" satisfied condition "success or failure" May 19 21:22:37.723: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-05c7daad-47ed-4237-9e9a-64d24e9102f1 container client-container: STEP: delete the pod May 19 21:22:37.756: INFO: Waiting for pod downwardapi-volume-05c7daad-47ed-4237-9e9a-64d24e9102f1 to disappear May 19 21:22:37.772: INFO: Pod downwardapi-volume-05c7daad-47ed-4237-9e9a-64d24e9102f1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:22:37.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5859" for this suite. 
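The pod this spec creates mounts a downwardAPI volume and asserts that the per-item mode is applied to the projected file. A stand-alone sketch of the same shape (the pod and path names here are hypothetical, not the ones the test generates):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo        # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "stat -c %a /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            mode: 0400                   # the per-item mode under test
            fieldRef:
              fieldPath: metadata.name
    EOF

Once the pod reaches Succeeded, kubectl logs downwardapi-mode-demo should print 400, the octal mode set on the item.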
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1018,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:22:37.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-b1bd1c7e-b7fa-4ac9-8683-c9523391a956 STEP: Creating a pod to test consume secrets May 19 21:22:38.277: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fb890ed9-09e1-4f9b-83c2-caa1e5985b75" in namespace "projected-272" to be "success or failure" May 19 21:22:38.299: INFO: Pod "pod-projected-secrets-fb890ed9-09e1-4f9b-83c2-caa1e5985b75": Phase="Pending", Reason="", readiness=false. Elapsed: 22.384252ms May 19 21:22:40.305: INFO: Pod "pod-projected-secrets-fb890ed9-09e1-4f9b-83c2-caa1e5985b75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028255008s May 19 21:22:42.309: INFO: Pod "pod-projected-secrets-fb890ed9-09e1-4f9b-83c2-caa1e5985b75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032152125s STEP: Saw pod success May 19 21:22:42.309: INFO: Pod "pod-projected-secrets-fb890ed9-09e1-4f9b-83c2-caa1e5985b75" satisfied condition "success or failure" May 19 21:22:42.311: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-fb890ed9-09e1-4f9b-83c2-caa1e5985b75 container projected-secret-volume-test: STEP: delete the pod May 19 21:22:42.388: INFO: Waiting for pod pod-projected-secrets-fb890ed9-09e1-4f9b-83c2-caa1e5985b75 to disappear May 19 21:22:42.394: INFO: Pod pod-projected-secrets-fb890ed9-09e1-4f9b-83c2-caa1e5985b75 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:22:42.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-272" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1063,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:22:42.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:22:42.510: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f0f2d80-f0d0-4d00-a15a-d1780df7a831" in namespace "downward-api-2225" to be "success or failure" May 19 21:22:42.513: INFO: Pod "downwardapi-volume-2f0f2d80-f0d0-4d00-a15a-d1780df7a831": Phase="Pending", Reason="", readiness=false. Elapsed: 3.307917ms May 19 21:22:44.517: INFO: Pod "downwardapi-volume-2f0f2d80-f0d0-4d00-a15a-d1780df7a831": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007179167s May 19 21:22:46.521: INFO: Pod "downwardapi-volume-2f0f2d80-f0d0-4d00-a15a-d1780df7a831": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011263356s STEP: Saw pod success May 19 21:22:46.522: INFO: Pod "downwardapi-volume-2f0f2d80-f0d0-4d00-a15a-d1780df7a831" satisfied condition "success or failure" May 19 21:22:46.532: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2f0f2d80-f0d0-4d00-a15a-d1780df7a831 container client-container: STEP: delete the pod May 19 21:22:46.563: INFO: Waiting for pod downwardapi-volume-2f0f2d80-f0d0-4d00-a15a-d1780df7a831 to disappear May 19 21:22:46.579: INFO: Pod downwardapi-volume-2f0f2d80-f0d0-4d00-a15a-d1780df7a831 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:22:46.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2225" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1064,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:22:46.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:22:57.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7781" for this suite. • [SLOW TEST:11.289 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":59,"skipped":1077,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:22:57.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-a2e60bdf-f668-4451-aef9-5d3142181e8b STEP: Creating a pod to test consume configMaps May 19 21:22:57.984: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5cb18cba-e661-4e46-b2d5-63d6fe45f4e5" in namespace "projected-3772" to be "success or failure" May 19 21:22:57.988: INFO: Pod "pod-projected-configmaps-5cb18cba-e661-4e46-b2d5-63d6fe45f4e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.165027ms May 19 21:23:00.024: INFO: Pod "pod-projected-configmaps-5cb18cba-e661-4e46-b2d5-63d6fe45f4e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039962866s May 19 21:23:02.030: INFO: Pod "pod-projected-configmaps-5cb18cba-e661-4e46-b2d5-63d6fe45f4e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045823905s STEP: Saw pod success May 19 21:23:02.030: INFO: Pod "pod-projected-configmaps-5cb18cba-e661-4e46-b2d5-63d6fe45f4e5" satisfied condition "success or failure" May 19 21:23:02.042: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-5cb18cba-e661-4e46-b2d5-63d6fe45f4e5 container projected-configmap-volume-test: STEP: delete the pod May 19 21:23:02.092: INFO: Waiting for pod pod-projected-configmaps-5cb18cba-e661-4e46-b2d5-63d6fe45f4e5 to disappear May 19 21:23:02.102: INFO: Pod pod-projected-configmaps-5cb18cba-e661-4e46-b2d5-63d6fe45f4e5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:23:02.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3772" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1089,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:23:02.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 19 21:23:06.790: INFO: Successfully updated pod "annotationupdateea33d5b6-e935-4282-80e2-ae12fb2df393" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:23:08.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2001" for this suite. 
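The annotation-update spec passes because the kubelet refreshes projected downward API volumes after pod metadata changes. The update the test performs on the pod named in the log is equivalent to the following; the annotation key/value and the mount path are illustrative, since the log only records that the pod was successfully updated:

    kubectl -n projected-2001 annotate pod \
        annotationupdateea33d5b6-e935-4282-80e2-ae12fb2df393 \
        builder=updated-value --overwrite
    # the projected file catches up within the kubelet sync period
    kubectl -n projected-2001 exec \
        annotationupdateea33d5b6-e935-4282-80e2-ae12fb2df393 -- \
        cat /etc/podinfo/annotations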
• [SLOW TEST:6.802 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1142,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:23:08.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:23:09.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 19 21:23:09.158: INFO: stderr: "" May 19 21:23:09.158: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:23:09.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3564" for this suite. 
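The same completeness check can be scripted against structured output rather than the Go-struct dump above; a small sketch, assuming jq is available on the client:

    kubectl --kubeconfig=/root/.kube/config version -o json \
      | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'
    # v1.17.4
    # v1.17.2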
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":62,"skipped":1157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:23:09.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 19 21:23:15.253: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5313 PodName:pod-sharedvolume-d475a0f9-75a8-41fe-a54b-427b96cd7407 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 21:23:15.253: INFO: >>> kubeConfig: /root/.kube/config I0519 21:23:15.278634 6 log.go:172] (0xc002971ad0) (0xc0022e40a0) Create stream I0519 21:23:15.278671 6 log.go:172] (0xc002971ad0) (0xc0022e40a0) Stream added, broadcasting: 1 I0519 21:23:15.280876 6 log.go:172] (0xc002971ad0) Reply frame received for 1 I0519 21:23:15.280916 6 log.go:172] (0xc002971ad0) (0xc001a2ebe0) Create stream I0519 21:23:15.280925 6 log.go:172] (0xc002971ad0) (0xc001a2ebe0) Stream added, broadcasting: 3 I0519 21:23:15.282066 6 log.go:172] (0xc002971ad0) Reply frame received for 3 I0519 21:23:15.282100 6 log.go:172] (0xc002971ad0) (0xc001e766e0) Create stream I0519 21:23:15.282118 6 log.go:172] (0xc002971ad0) (0xc001e766e0) Stream added, broadcasting: 5 I0519 21:23:15.282952 6 log.go:172] (0xc002971ad0) Reply frame received for 5 I0519 21:23:15.341973 6 log.go:172] (0xc002971ad0) Data frame received for 3 I0519 21:23:15.342004 6 log.go:172] (0xc001a2ebe0) (3) Data frame handling I0519 21:23:15.342022 6 log.go:172] (0xc001a2ebe0) (3) Data frame sent I0519 21:23:15.342038 6 log.go:172] (0xc002971ad0) Data frame received for 3 I0519 21:23:15.342050 6 log.go:172] (0xc001a2ebe0) (3) Data frame handling I0519 21:23:15.342074 6 log.go:172] (0xc002971ad0) Data frame received for 5 I0519 21:23:15.342117 6 log.go:172] (0xc001e766e0) (5) Data frame handling I0519 21:23:15.344229 6 log.go:172] (0xc002971ad0) Data frame received for 1 I0519 21:23:15.344248 6 log.go:172] (0xc0022e40a0) (1) Data frame handling I0519 21:23:15.344260 6 log.go:172] (0xc0022e40a0) (1) Data frame sent I0519 21:23:15.344273 6 log.go:172] (0xc002971ad0) (0xc0022e40a0) Stream removed, broadcasting: 1 I0519 21:23:15.344327 6 log.go:172] (0xc002971ad0) Go away received I0519 21:23:15.344365 6 log.go:172] (0xc002971ad0) (0xc0022e40a0) Stream removed, broadcasting: 1 I0519 21:23:15.344382 6 log.go:172] (0xc002971ad0) (0xc001a2ebe0) Stream removed, broadcasting: 3 I0519 21:23:15.344388 6 log.go:172] (0xc002971ad0) (0xc001e766e0) Stream removed, broadcasting: 5 May 19 21:23:15.344: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:23:15.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5313" for this suite. • [SLOW TEST:6.182 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":63,"skipped":1203,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:23:15.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 19 21:23:16.102: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 19 21:23:18.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520196, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520196, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520196, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520196, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:23:21.157: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:23:21.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:23:22.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7263" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.114 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":64,"skipped":1216,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:23:22.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:23:38.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7894" for this suite. • [SLOW TEST:16.098 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":65,"skipped":1232,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:23:38.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:23:54.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8874" for this suite. • [SLOW TEST:16.313 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":66,"skipped":1239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:23:54.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:23:54.961: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:23:59.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5349" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1282,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:23:59.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-07313106-4eef-4eb3-b181-ebb896fc3ea8 STEP: Creating a pod to test consume secrets May 19 21:23:59.118: INFO: Waiting up to 5m0s for pod "pod-secrets-d01962ad-00df-4800-83da-85f0dc25a8fa" in namespace "secrets-1997" to be "success or failure" May 19 21:23:59.181: INFO: Pod "pod-secrets-d01962ad-00df-4800-83da-85f0dc25a8fa": Phase="Pending", Reason="", readiness=false. Elapsed: 62.451767ms May 19 21:24:01.185: INFO: Pod "pod-secrets-d01962ad-00df-4800-83da-85f0dc25a8fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066498845s May 19 21:24:03.189: INFO: Pod "pod-secrets-d01962ad-00df-4800-83da-85f0dc25a8fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.070787366s STEP: Saw pod success May 19 21:24:03.189: INFO: Pod "pod-secrets-d01962ad-00df-4800-83da-85f0dc25a8fa" satisfied condition "success or failure" May 19 21:24:03.192: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d01962ad-00df-4800-83da-85f0dc25a8fa container secret-volume-test: STEP: delete the pod May 19 21:24:03.231: INFO: Waiting for pod pod-secrets-d01962ad-00df-4800-83da-85f0dc25a8fa to disappear May 19 21:24:03.260: INFO: Pod pod-secrets-d01962ad-00df-4800-83da-85f0dc25a8fa no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:24:03.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1997" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1282,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:24:03.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:24:03.352: INFO: Waiting up to 5m0s for pod "downwardapi-volume-772e66e8-54ab-42b5-a18a-47774863e363" in namespace "projected-612" to be "success or failure" May 19 21:24:03.356: INFO: Pod "downwardapi-volume-772e66e8-54ab-42b5-a18a-47774863e363": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855239ms May 19 21:24:05.433: INFO: Pod "downwardapi-volume-772e66e8-54ab-42b5-a18a-47774863e363": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08109167s May 19 21:24:07.437: INFO: Pod "downwardapi-volume-772e66e8-54ab-42b5-a18a-47774863e363": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084728007s STEP: Saw pod success May 19 21:24:07.437: INFO: Pod "downwardapi-volume-772e66e8-54ab-42b5-a18a-47774863e363" satisfied condition "success or failure" May 19 21:24:07.440: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-772e66e8-54ab-42b5-a18a-47774863e363 container client-container: STEP: delete the pod May 19 21:24:07.459: INFO: Waiting for pod downwardapi-volume-772e66e8-54ab-42b5-a18a-47774863e363 to disappear May 19 21:24:07.468: INFO: Pod downwardapi-volume-772e66e8-54ab-42b5-a18a-47774863e363 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:24:07.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-612" for this suite. 
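Exposing a container's CPU request through the downward API uses resourceFieldRef rather than fieldRef; a minimal sketch with hypothetical names and a hypothetical 250m request:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-demo         # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m                # report the value in millicores
    EOF

With a 250m request and a 1m divisor the projected file reads 250; when no limit or request is set, the node's allocatable value is substituted, which is exactly what the memory-limit spec earlier in this run verifies.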
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:24:07.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:24:08.135: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:24:10.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520248, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520248, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520248, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520248, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:24:13.184: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:24:13.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4259-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:24:14.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7475" for this suite. STEP: Destroying namespace "webhook-7475-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.073 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":70,"skipped":1332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:24:14.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-bzzf STEP: Creating a pod to test atomic-volume-subpath May 19 21:24:14.610: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bzzf" in namespace "subpath-5001" to be "success or failure" May 19 21:24:14.648: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Pending", Reason="", readiness=false. Elapsed: 38.086149ms May 19 21:24:16.742: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132167886s May 19 21:24:18.747: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. Elapsed: 4.137107967s May 19 21:24:20.752: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. Elapsed: 6.14226965s May 19 21:24:22.756: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. Elapsed: 8.145936554s May 19 21:24:24.760: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. Elapsed: 10.149939352s May 19 21:24:26.764: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. Elapsed: 12.153652657s May 19 21:24:28.768: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. Elapsed: 14.157827612s May 19 21:24:30.772: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. Elapsed: 16.162070714s May 19 21:24:32.776: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. Elapsed: 18.166140469s May 19 21:24:34.998: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. Elapsed: 20.387631263s May 19 21:24:37.004: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.393775069s May 19 21:24:39.008: INFO: Pod "pod-subpath-test-configmap-bzzf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.397957836s STEP: Saw pod success May 19 21:24:39.008: INFO: Pod "pod-subpath-test-configmap-bzzf" satisfied condition "success or failure" May 19 21:24:39.011: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-bzzf container test-container-subpath-configmap-bzzf: STEP: delete the pod May 19 21:24:39.028: INFO: Waiting for pod pod-subpath-test-configmap-bzzf to disappear May 19 21:24:39.039: INFO: Pod pod-subpath-test-configmap-bzzf no longer exists STEP: Deleting pod pod-subpath-test-configmap-bzzf May 19 21:24:39.039: INFO: Deleting pod "pod-subpath-test-configmap-bzzf" in namespace "subpath-5001" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:24:39.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5001" for this suite. • [SLOW TEST:24.529 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":71,"skipped":1357,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:24:39.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-f03bb2c9-ea97-4b56-8fdf-b56fe21e3b99 STEP: Creating configMap with name cm-test-opt-upd-ee165a94-1be1-48a6-89f1-ff10b0af877d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f03bb2c9-ea97-4b56-8fdf-b56fe21e3b99 STEP: Updating configmap cm-test-opt-upd-ee165a94-1be1-48a6-89f1-ff10b0af877d STEP: Creating configMap with name cm-test-opt-create-b2932916-08f4-4d08-b9f3-e9d7eeebca1e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:24:47.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3640" for this suite. 
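The optional-updates spec above mounts configMaps with optional: true and then deletes, updates, and creates them while the pod runs, waiting for the kubelet to resync the volume. A minimal stand-alone version of that setup (names and image are placeholders):

    # optional-cm-pod.yaml (hypothetical)
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-watch
    spec:
      containers:
      - name: watcher
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/cm/data 2>/dev/null; sleep 5; done"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: maybe-present
          optional: true    # the pod starts even if the configMap does not exist yet

    kubectl apply -f optional-cm-pod.yaml
    # Creating the configMap afterwards shows up in the volume on the next kubelet sync:
    kubectl create configmap maybe-present --from-literal=data=hello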
• [SLOW TEST:8.537 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1369,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:24:47.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:24:47.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8186285-7d7c-4090-a44c-8006a447bca0" in namespace "projected-2524" to be "success or failure" May 19 21:24:47.747: INFO: Pod "downwardapi-volume-e8186285-7d7c-4090-a44c-8006a447bca0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.368891ms May 19 21:24:49.770: INFO: Pod "downwardapi-volume-e8186285-7d7c-4090-a44c-8006a447bca0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026319837s May 19 21:24:51.773: INFO: Pod "downwardapi-volume-e8186285-7d7c-4090-a44c-8006a447bca0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029913119s STEP: Saw pod success May 19 21:24:51.774: INFO: Pod "downwardapi-volume-e8186285-7d7c-4090-a44c-8006a447bca0" satisfied condition "success or failure" May 19 21:24:51.776: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e8186285-7d7c-4090-a44c-8006a447bca0 container client-container: STEP: delete the pod May 19 21:24:51.822: INFO: Waiting for pod downwardapi-volume-e8186285-7d7c-4090-a44c-8006a447bca0 to disappear May 19 21:24:51.831: INFO: Pod downwardapi-volume-e8186285-7d7c-4090-a44c-8006a447bca0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:24:51.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2524" for this suite. 
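The projected downward API spec reads the pod's own name back out of a mounted file. A minimal equivalent (pod, volume, and file names are placeholders):

    # podname-downward.yaml (hypothetical)
    apiVersion: v1
    kind: Pod
    metadata:
      name: podname-demo
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name

    kubectl apply -f podname-downward.yaml
    # once the container has completed:
    kubectl logs podname-demo    # prints: podname-demo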
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1387,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:24:51.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-bde1d670-3a2e-4d1c-ba31-35dc3adc1157 STEP: Creating secret with name secret-projected-all-test-volume-01cc6dd6-66dd-4bf9-a0c4-bce31694febf STEP: Creating a pod to test Check all projections for projected volume plugin May 19 21:24:51.929: INFO: Waiting up to 5m0s for pod "projected-volume-c1506050-d228-4dc7-b2a2-cab61ca1da25" in namespace "projected-4007" to be "success or failure" May 19 21:24:51.963: INFO: Pod "projected-volume-c1506050-d228-4dc7-b2a2-cab61ca1da25": Phase="Pending", Reason="", readiness=false. Elapsed: 34.244437ms May 19 21:24:53.967: INFO: Pod "projected-volume-c1506050-d228-4dc7-b2a2-cab61ca1da25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038300414s May 19 21:24:55.984: INFO: Pod "projected-volume-c1506050-d228-4dc7-b2a2-cab61ca1da25": Phase="Running", Reason="", readiness=true. Elapsed: 4.055453937s May 19 21:24:57.989: INFO: Pod "projected-volume-c1506050-d228-4dc7-b2a2-cab61ca1da25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059851897s STEP: Saw pod success May 19 21:24:57.989: INFO: Pod "projected-volume-c1506050-d228-4dc7-b2a2-cab61ca1da25" satisfied condition "success or failure" May 19 21:24:57.992: INFO: Trying to get logs from node jerma-worker pod projected-volume-c1506050-d228-4dc7-b2a2-cab61ca1da25 container projected-all-volume-test: STEP: delete the pod May 19 21:24:58.011: INFO: Waiting for pod projected-volume-c1506050-d228-4dc7-b2a2-cab61ca1da25 to disappear May 19 21:24:58.016: INFO: Pod projected-volume-c1506050-d228-4dc7-b2a2-cab61ca1da25 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:24:58.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4007" for this suite. 
• [SLOW TEST:6.183 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1409,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:24:58.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7067 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7067 I0519 21:24:58.358868 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7067, replica count: 2 I0519 21:25:01.409386 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 21:25:04.409639 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 21:25:04.409: INFO: Creating new exec pod May 19 21:25:09.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7067 execpodplcrw -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 19 21:25:09.710: INFO: stderr: "I0519 21:25:09.592969 1077 log.go:172] (0xc000a1a580) (0xc000a060a0) Create stream\nI0519 21:25:09.593021 1077 log.go:172] (0xc000a1a580) (0xc000a060a0) Stream added, broadcasting: 1\nI0519 21:25:09.596044 1077 log.go:172] (0xc000a1a580) Reply frame received for 1\nI0519 21:25:09.596080 1077 log.go:172] (0xc000a1a580) (0xc000a06140) Create stream\nI0519 21:25:09.596093 1077 log.go:172] (0xc000a1a580) (0xc000a06140) Stream added, broadcasting: 3\nI0519 21:25:09.597306 1077 log.go:172] (0xc000a1a580) Reply frame received for 3\nI0519 21:25:09.597350 1077 log.go:172] (0xc000a1a580) (0xc000a061e0) Create stream\nI0519 21:25:09.597374 1077 log.go:172] (0xc000a1a580) (0xc000a061e0) Stream added, broadcasting: 5\nI0519 21:25:09.598391 1077 log.go:172] (0xc000a1a580) Reply frame received for 5\nI0519 21:25:09.677842 1077 log.go:172] (0xc000a1a580) Data frame received for 5\nI0519 21:25:09.677866 1077 log.go:172] (0xc000a061e0) (5) Data frame handling\nI0519 21:25:09.677880 1077 log.go:172] 
(0xc000a061e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0519 21:25:09.700677 1077 log.go:172] (0xc000a1a580) Data frame received for 3\nI0519 21:25:09.700707 1077 log.go:172] (0xc000a06140) (3) Data frame handling\nI0519 21:25:09.700738 1077 log.go:172] (0xc000a1a580) Data frame received for 5\nI0519 21:25:09.700749 1077 log.go:172] (0xc000a061e0) (5) Data frame handling\nI0519 21:25:09.700760 1077 log.go:172] (0xc000a061e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0519 21:25:09.700826 1077 log.go:172] (0xc000a1a580) Data frame received for 5\nI0519 21:25:09.700848 1077 log.go:172] (0xc000a061e0) (5) Data frame handling\nI0519 21:25:09.703240 1077 log.go:172] (0xc000a1a580) Data frame received for 1\nI0519 21:25:09.703348 1077 log.go:172] (0xc000a060a0) (1) Data frame handling\nI0519 21:25:09.703444 1077 log.go:172] (0xc000a060a0) (1) Data frame sent\nI0519 21:25:09.703489 1077 log.go:172] (0xc000a1a580) (0xc000a060a0) Stream removed, broadcasting: 1\nI0519 21:25:09.703527 1077 log.go:172] (0xc000a1a580) Go away received\nI0519 21:25:09.704007 1077 log.go:172] (0xc000a1a580) (0xc000a060a0) Stream removed, broadcasting: 1\nI0519 21:25:09.704035 1077 log.go:172] (0xc000a1a580) (0xc000a06140) Stream removed, broadcasting: 3\nI0519 21:25:09.704046 1077 log.go:172] (0xc000a1a580) (0xc000a061e0) Stream removed, broadcasting: 5\n" May 19 21:25:09.710: INFO: stdout: "" May 19 21:25:09.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7067 execpodplcrw -- /bin/sh -x -c nc -zv -t -w 2 10.96.212.151 80' May 19 21:25:09.903: INFO: stderr: "I0519 21:25:09.835459 1097 log.go:172] (0xc0007189a0) (0xc0007361e0) Create stream\nI0519 21:25:09.835555 1097 log.go:172] (0xc0007189a0) (0xc0007361e0) Stream added, broadcasting: 1\nI0519 21:25:09.840216 1097 log.go:172] (0xc0007189a0) Reply frame received for 1\nI0519 21:25:09.840283 1097 log.go:172] (0xc0007189a0) (0xc00021d5e0) Create stream\nI0519 21:25:09.840297 1097 log.go:172] (0xc0007189a0) (0xc00021d5e0) Stream added, broadcasting: 3\nI0519 21:25:09.841750 1097 log.go:172] (0xc0007189a0) Reply frame received for 3\nI0519 21:25:09.841808 1097 log.go:172] (0xc0007189a0) (0xc000637c20) Create stream\nI0519 21:25:09.841824 1097 log.go:172] (0xc0007189a0) (0xc000637c20) Stream added, broadcasting: 5\nI0519 21:25:09.842996 1097 log.go:172] (0xc0007189a0) Reply frame received for 5\nI0519 21:25:09.897559 1097 log.go:172] (0xc0007189a0) Data frame received for 3\nI0519 21:25:09.897597 1097 log.go:172] (0xc00021d5e0) (3) Data frame handling\nI0519 21:25:09.897619 1097 log.go:172] (0xc0007189a0) Data frame received for 5\nI0519 21:25:09.897629 1097 log.go:172] (0xc000637c20) (5) Data frame handling\nI0519 21:25:09.897646 1097 log.go:172] (0xc000637c20) (5) Data frame sent\nI0519 21:25:09.897658 1097 log.go:172] (0xc0007189a0) Data frame received for 5\nI0519 21:25:09.897666 1097 log.go:172] (0xc000637c20) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.212.151 80\nConnection to 10.96.212.151 80 port [tcp/http] succeeded!\nI0519 21:25:09.898686 1097 log.go:172] (0xc0007189a0) Data frame received for 1\nI0519 21:25:09.898709 1097 log.go:172] (0xc0007361e0) (1) Data frame handling\nI0519 21:25:09.898721 1097 log.go:172] (0xc0007361e0) (1) Data frame sent\nI0519 21:25:09.898732 1097 log.go:172] (0xc0007189a0) (0xc0007361e0) Stream removed, broadcasting: 1\nI0519 21:25:09.898748 1097 log.go:172] (0xc0007189a0) Go away received\nI0519 21:25:09.899146 
1097 log.go:172] (0xc0007189a0) (0xc0007361e0) Stream removed, broadcasting: 1\nI0519 21:25:09.899166 1097 log.go:172] (0xc0007189a0) (0xc00021d5e0) Stream removed, broadcasting: 3\nI0519 21:25:09.899176 1097 log.go:172] (0xc0007189a0) (0xc000637c20) Stream removed, broadcasting: 5\n" May 19 21:25:09.904: INFO: stdout: "" May 19 21:25:09.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7067 execpodplcrw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30122' May 19 21:25:10.122: INFO: stderr: "I0519 21:25:10.049980 1120 log.go:172] (0xc000930e70) (0xc00094e280) Create stream\nI0519 21:25:10.050040 1120 log.go:172] (0xc000930e70) (0xc00094e280) Stream added, broadcasting: 1\nI0519 21:25:10.054787 1120 log.go:172] (0xc000930e70) Reply frame received for 1\nI0519 21:25:10.054849 1120 log.go:172] (0xc000930e70) (0xc00090a0a0) Create stream\nI0519 21:25:10.054875 1120 log.go:172] (0xc000930e70) (0xc00090a0a0) Stream added, broadcasting: 3\nI0519 21:25:10.056026 1120 log.go:172] (0xc000930e70) Reply frame received for 3\nI0519 21:25:10.056068 1120 log.go:172] (0xc000930e70) (0xc00094e320) Create stream\nI0519 21:25:10.056080 1120 log.go:172] (0xc000930e70) (0xc00094e320) Stream added, broadcasting: 5\nI0519 21:25:10.057668 1120 log.go:172] (0xc000930e70) Reply frame received for 5\nI0519 21:25:10.116653 1120 log.go:172] (0xc000930e70) Data frame received for 3\nI0519 21:25:10.116707 1120 log.go:172] (0xc00090a0a0) (3) Data frame handling\nI0519 21:25:10.116743 1120 log.go:172] (0xc000930e70) Data frame received for 5\nI0519 21:25:10.116763 1120 log.go:172] (0xc00094e320) (5) Data frame handling\nI0519 21:25:10.116785 1120 log.go:172] (0xc00094e320) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30122\nConnection to 172.17.0.10 30122 port [tcp/30122] succeeded!\nI0519 21:25:10.116856 1120 log.go:172] (0xc000930e70) Data frame received for 5\nI0519 21:25:10.116876 1120 log.go:172] (0xc00094e320) (5) Data frame handling\nI0519 21:25:10.118812 1120 log.go:172] (0xc000930e70) Data frame received for 1\nI0519 21:25:10.118830 1120 log.go:172] (0xc00094e280) (1) Data frame handling\nI0519 21:25:10.118840 1120 log.go:172] (0xc00094e280) (1) Data frame sent\nI0519 21:25:10.118853 1120 log.go:172] (0xc000930e70) (0xc00094e280) Stream removed, broadcasting: 1\nI0519 21:25:10.118921 1120 log.go:172] (0xc000930e70) Go away received\nI0519 21:25:10.119253 1120 log.go:172] (0xc000930e70) (0xc00094e280) Stream removed, broadcasting: 1\nI0519 21:25:10.119270 1120 log.go:172] (0xc000930e70) (0xc00090a0a0) Stream removed, broadcasting: 3\nI0519 21:25:10.119279 1120 log.go:172] (0xc000930e70) (0xc00094e320) Stream removed, broadcasting: 5\n" May 19 21:25:10.122: INFO: stdout: "" May 19 21:25:10.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7067 execpodplcrw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30122' May 19 21:25:10.331: INFO: stderr: "I0519 21:25:10.254616 1138 log.go:172] (0xc000a72000) (0xc00073e000) Create stream\nI0519 21:25:10.254705 1138 log.go:172] (0xc000a72000) (0xc00073e000) Stream added, broadcasting: 1\nI0519 21:25:10.257371 1138 log.go:172] (0xc000a72000) Reply frame received for 1\nI0519 21:25:10.257416 1138 log.go:172] (0xc000a72000) (0xc000942280) Create stream\nI0519 21:25:10.257430 1138 log.go:172] (0xc000a72000) (0xc000942280) Stream added, broadcasting: 3\nI0519 21:25:10.258468 1138 log.go:172] (0xc000a72000) Reply frame received for 3\nI0519 21:25:10.258502 1138 log.go:172] 
(0xc000a72000) (0xc000942320) Create stream\nI0519 21:25:10.258513 1138 log.go:172] (0xc000a72000) (0xc000942320) Stream added, broadcasting: 5\nI0519 21:25:10.259562 1138 log.go:172] (0xc000a72000) Reply frame received for 5\nI0519 21:25:10.323094 1138 log.go:172] (0xc000a72000) Data frame received for 3\nI0519 21:25:10.323136 1138 log.go:172] (0xc000942280) (3) Data frame handling\nI0519 21:25:10.323173 1138 log.go:172] (0xc000a72000) Data frame received for 5\nI0519 21:25:10.323187 1138 log.go:172] (0xc000942320) (5) Data frame handling\nI0519 21:25:10.323203 1138 log.go:172] (0xc000942320) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30122\nConnection to 172.17.0.8 30122 port [tcp/30122] succeeded!\nI0519 21:25:10.323245 1138 log.go:172] (0xc000a72000) Data frame received for 5\nI0519 21:25:10.323262 1138 log.go:172] (0xc000942320) (5) Data frame handling\nI0519 21:25:10.324871 1138 log.go:172] (0xc000a72000) Data frame received for 1\nI0519 21:25:10.324901 1138 log.go:172] (0xc00073e000) (1) Data frame handling\nI0519 21:25:10.324916 1138 log.go:172] (0xc00073e000) (1) Data frame sent\nI0519 21:25:10.324936 1138 log.go:172] (0xc000a72000) (0xc00073e000) Stream removed, broadcasting: 1\nI0519 21:25:10.325511 1138 log.go:172] (0xc000a72000) (0xc00073e000) Stream removed, broadcasting: 1\nI0519 21:25:10.325538 1138 log.go:172] (0xc000a72000) (0xc000942280) Stream removed, broadcasting: 3\nI0519 21:25:10.325548 1138 log.go:172] (0xc000a72000) (0xc000942320) Stream removed, broadcasting: 5\n" May 19 21:25:10.331: INFO: stdout: "" May 19 21:25:10.331: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:25:10.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7067" for this suite. 
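The service spec above flips a Service from type=ExternalName to type=NodePort, then probes the service name, the cluster IP, and both node IPs with nc, exactly as the exec output shows. Done by hand it looks roughly like this (service and app names are placeholders, and the JSON patch is a sketch rather than the framework's exact update):

    kubectl create service externalname demo-ext --external-name=example.com
    kubectl patch service demo-ext -p \
      '{"spec":{"type":"NodePort","externalName":null,"selector":{"app":"demo"},"ports":[{"port":80}]}}'
    NODEPORT=$(kubectl get service demo-ext -o jsonpath='{.spec.ports[0].nodePort}')
    # from any pod in the cluster, the same reachability checks as in the log:
    # nc -zv -t -w 2 demo-ext 80
    # nc -zv -t -w 2 <node-ip> $NODEPORT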
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.389 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":75,"skipped":1415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:25:10.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-e8571e7f-f49a-4aa1-b5ff-7fc8a3ae2726 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:25:10.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5815" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":76,"skipped":1490,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:25:10.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 19 21:25:10.569: INFO: >>> kubeConfig: /root/.kube/config May 19 21:25:13.560: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:25:25.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9800" for this suite. 
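The CRD-publishing spec creates two CRDs in the same group/version with different kinds and checks that both schemas land in the aggregated OpenAPI document. The user-visible effect is that kubectl explain works for both kinds; a sketch with placeholder plurals:

    # assuming CRDs with kinds Foo and Bar in group example.com/v1 have been created:
    kubectl explain foos.spec
    kubectl explain bars.spec
    # the raw aggregated document can also be inspected directly:
    kubectl get --raw /openapi/v2 | head -c 200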
• [SLOW TEST:14.696 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":77,"skipped":1507,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:25:25.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:25:25.339: INFO: Create a RollingUpdate DaemonSet May 19 21:25:25.343: INFO: Check that daemon pods launch on every node of the cluster May 19 21:25:25.354: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:25:25.358: INFO: Number of nodes with available pods: 0 May 19 21:25:25.358: INFO: Node jerma-worker is running more than one daemon pod May 19 21:25:26.363: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:25:26.367: INFO: Number of nodes with available pods: 0 May 19 21:25:26.367: INFO: Node jerma-worker is running more than one daemon pod May 19 21:25:27.482: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:25:27.484: INFO: Number of nodes with available pods: 0 May 19 21:25:27.484: INFO: Node jerma-worker is running more than one daemon pod May 19 21:25:28.363: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:25:28.367: INFO: Number of nodes with available pods: 0 May 19 21:25:28.367: INFO: Node jerma-worker is running more than one daemon pod May 19 21:25:29.364: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:25:29.367: INFO: Number of nodes with available pods: 0 May 19 21:25:29.367: INFO: Node jerma-worker is running more than one daemon pod May 19 21:25:30.363: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 19 21:25:30.366: INFO: Number of nodes with available pods: 2 May 19 21:25:30.366: INFO: Number of running nodes: 2, number of available pods: 2 May 19 21:25:30.366: INFO: Update the DaemonSet to trigger a rollout May 19 21:25:30.372: INFO: Updating DaemonSet daemon-set May 19 21:25:34.392: INFO: Roll back the DaemonSet before rollout is complete May 19 21:25:34.398: INFO: Updating DaemonSet daemon-set May 19 21:25:34.398: INFO: Make sure DaemonSet rollback is complete May 19 21:25:34.407: INFO: Wrong image for pod: daemon-set-nmfkx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 19 21:25:34.407: INFO: Pod daemon-set-nmfkx is not available May 19 21:25:34.414: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:25:35.525: INFO: Wrong image for pod: daemon-set-nmfkx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 19 21:25:35.525: INFO: Pod daemon-set-nmfkx is not available May 19 21:25:35.777: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 21:25:36.418: INFO: Pod daemon-set-vjn68 is not available May 19 21:25:36.421: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1567, will wait for the garbage collector to delete the pods May 19 21:25:36.484: INFO: Deleting DaemonSet.extensions daemon-set took: 6.54508ms May 19 21:25:36.785: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.330261ms May 19 21:25:49.487: INFO: Number of nodes with available pods: 0 May 19 21:25:49.488: INFO: Number of running nodes: 0, number of available pods: 0 May 19 21:25:49.490: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1567/daemonsets","resourceVersion":"17528625"},"items":null} May 19 21:25:49.492: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1567/pods","resourceVersion":"17528625"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:25:49.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1567" for this suite. 
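The daemon-set spec triggers a rolling update to the bad image foo:non-existent seen in the log, then rolls it back before it finishes, asserting the already-healthy pods were never restarted. The equivalent kubectl flow (the container name app is a placeholder):

    kubectl set image daemonset/daemon-set app=foo:non-existent   # start a rollout that can never complete
    kubectl rollout undo daemonset/daemon-set                     # roll back mid-rollout
    kubectl rollout status daemonset/daemon-set                   # converges on the original image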
• [SLOW TEST:24.334 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":78,"skipped":1513,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:25:49.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 19 21:25:54.144: INFO: Successfully updated pod "annotationupdatecf83a4b9-dffa-4f8e-824e-66028f4ddaf9" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:25:56.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-592" for this suite. 
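The downward API volume spec relies on the kubelet refreshing metadata files in place: the test updates the pod's annotations and waits for the mounted file to change. A sketch (pod name and annotation are placeholders):

    # under the pod's spec.volumes, annotations are exposed as a file that tracks changes:
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations

    kubectl annotate pod anno-demo demo=updated --overwrite
    # the mounted annotations file is rewritten by the kubelet shortly afterwards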
• [SLOW TEST:6.662 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1515,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:25:56.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-gs6v STEP: Creating a pod to test atomic-volume-subpath May 19 21:25:56.428: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gs6v" in namespace "subpath-3805" to be "success or failure" May 19 21:25:56.456: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 27.932194ms May 19 21:25:58.460: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031962541s May 19 21:26:00.464: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 4.035450464s May 19 21:26:02.542: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 6.114333656s May 19 21:26:04.546: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 8.11794169s May 19 21:26:06.550: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 10.12216225s May 19 21:26:08.555: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 12.126686813s May 19 21:26:10.559: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 14.131216167s May 19 21:26:12.563: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 16.134653451s May 19 21:26:14.568: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 18.139529291s May 19 21:26:16.571: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 20.143135093s May 19 21:26:18.575: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Running", Reason="", readiness=true. Elapsed: 22.146857935s May 19 21:26:20.584: INFO: Pod "pod-subpath-test-configmap-gs6v": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.15590159s STEP: Saw pod success May 19 21:26:20.584: INFO: Pod "pod-subpath-test-configmap-gs6v" satisfied condition "success or failure" May 19 21:26:20.587: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-gs6v container test-container-subpath-configmap-gs6v: STEP: delete the pod May 19 21:26:20.627: INFO: Waiting for pod pod-subpath-test-configmap-gs6v to disappear May 19 21:26:20.678: INFO: Pod pod-subpath-test-configmap-gs6v no longer exists STEP: Deleting pod pod-subpath-test-configmap-gs6v May 19 21:26:20.678: INFO: Deleting pod "pod-subpath-test-configmap-gs6v" in namespace "subpath-3805" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:26:20.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3805" for this suite. • [SLOW TEST:24.719 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":80,"skipped":1535,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:26:20.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-xkcj STEP: Creating a pod to test atomic-volume-subpath May 19 21:26:21.278: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xkcj" in namespace "subpath-4779" to be "success or failure" May 19 21:26:21.358: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Pending", Reason="", readiness=false. Elapsed: 79.646981ms May 19 21:26:23.362: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083554557s May 19 21:26:25.366: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. Elapsed: 4.087441755s May 19 21:26:27.371: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. Elapsed: 6.092116585s May 19 21:26:29.375: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. Elapsed: 8.096119586s May 19 21:26:31.379: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.100753824s May 19 21:26:33.383: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. Elapsed: 12.10492018s May 19 21:26:35.388: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. Elapsed: 14.10928252s May 19 21:26:37.392: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. Elapsed: 16.113115944s May 19 21:26:39.396: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. Elapsed: 18.11741845s May 19 21:26:41.400: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. Elapsed: 20.121328437s May 19 21:26:43.404: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Running", Reason="", readiness=true. Elapsed: 22.125795184s May 19 21:26:45.409: INFO: Pod "pod-subpath-test-secret-xkcj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.130614758s STEP: Saw pod success May 19 21:26:45.409: INFO: Pod "pod-subpath-test-secret-xkcj" satisfied condition "success or failure" May 19 21:26:45.412: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-xkcj container test-container-subpath-secret-xkcj: STEP: delete the pod May 19 21:26:45.434: INFO: Waiting for pod pod-subpath-test-secret-xkcj to disappear May 19 21:26:45.438: INFO: Pod pod-subpath-test-secret-xkcj no longer exists STEP: Deleting pod pod-subpath-test-secret-xkcj May 19 21:26:45.438: INFO: Deleting pod "pod-subpath-test-secret-xkcj" in namespace "subpath-4779" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:26:45.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4779" for this suite. 
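Three atomic-writer specs in this stretch (configmap pod, configmap with a mountPath of an existing file, secret pod) all exercise subPath mounts, which graft a single path from a volume into the container. A minimal subPath mount (all names are placeholders); note that, unlike whole-volume mounts, a subPath mount does not pick up later updates to the configMap:

    containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "cat /etc/demo/file.txt"]
      volumeMounts:
      - name: data
        mountPath: /etc/demo/file.txt
        subPath: file.txt        # mounts just this key, not the whole volume
    volumes:
    - name: data
      configMap:
        name: demo-cm            # must contain a key named file.txt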
• [SLOW TEST:24.560 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":81,"skipped":1550,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:26:45.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:26:45.521: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-608403c3-3c96-44b1-8c21-2f215ec88290" in namespace "security-context-test-1014" to be "success or failure" May 19 21:26:45.535: INFO: Pod "busybox-readonly-false-608403c3-3c96-44b1-8c21-2f215ec88290": Phase="Pending", Reason="", readiness=false. Elapsed: 14.092662ms May 19 21:26:47.538: INFO: Pod "busybox-readonly-false-608403c3-3c96-44b1-8c21-2f215ec88290": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01711202s May 19 21:26:49.543: INFO: Pod "busybox-readonly-false-608403c3-3c96-44b1-8c21-2f215ec88290": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021609114s May 19 21:26:49.543: INFO: Pod "busybox-readonly-false-608403c3-3c96-44b1-8c21-2f215ec88290" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:26:49.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1014" for this suite. 
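The security-context spec confirms that readOnlyRootFilesystem: false leaves the root filesystem writable. The relevant field sits on the container's securityContext (pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: rw-rootfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo ok > /tmp/probe && cat /tmp/probe"]
        securityContext:
          readOnlyRootFilesystem: false   # with true, the write above would fail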
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1551,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:26:49.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-2843e04e-47aa-4dbf-bb85-1474f7dab596 STEP: Creating a pod to test consume secrets May 19 21:26:49.646: INFO: Waiting up to 5m0s for pod "pod-secrets-71bbecca-ff62-4ce9-9574-c180eaa00f08" in namespace "secrets-4501" to be "success or failure" May 19 21:26:49.649: INFO: Pod "pod-secrets-71bbecca-ff62-4ce9-9574-c180eaa00f08": Phase="Pending", Reason="", readiness=false. Elapsed: 3.231049ms May 19 21:26:51.654: INFO: Pod "pod-secrets-71bbecca-ff62-4ce9-9574-c180eaa00f08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007473626s May 19 21:26:53.657: INFO: Pod "pod-secrets-71bbecca-ff62-4ce9-9574-c180eaa00f08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011044347s STEP: Saw pod success May 19 21:26:53.657: INFO: Pod "pod-secrets-71bbecca-ff62-4ce9-9574-c180eaa00f08" satisfied condition "success or failure" May 19 21:26:53.659: INFO: Trying to get logs from node jerma-worker pod pod-secrets-71bbecca-ff62-4ce9-9574-c180eaa00f08 container secret-volume-test: STEP: delete the pod May 19 21:26:53.680: INFO: Waiting for pod pod-secrets-71bbecca-ff62-4ce9-9574-c180eaa00f08 to disappear May 19 21:26:53.714: INFO: Pod pod-secrets-71bbecca-ff62-4ce9-9574-c180eaa00f08 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:26:53.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4501" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:26:53.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8495 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8495 STEP: creating replication controller externalsvc in namespace services-8495 I0519 21:26:53.931466 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8495, replica count: 2 I0519 21:26:56.981794 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 21:26:59.981976 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 19 21:27:00.062: INFO: Creating new exec pod May 19 21:27:04.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8495 execpodn9778 -- /bin/sh -x -c nslookup nodeport-service' May 19 21:27:07.326: INFO: stderr: "I0519 21:27:07.210214 1158 log.go:172] (0xc000104b00) (0xc00077f040) Create stream\nI0519 21:27:07.210299 1158 log.go:172] (0xc000104b00) (0xc00077f040) Stream added, broadcasting: 1\nI0519 21:27:07.213915 1158 log.go:172] (0xc000104b00) Reply frame received for 1\nI0519 21:27:07.213992 1158 log.go:172] (0xc000104b00) (0xc00077f0e0) Create stream\nI0519 21:27:07.214010 1158 log.go:172] (0xc000104b00) (0xc00077f0e0) Stream added, broadcasting: 3\nI0519 21:27:07.215224 1158 log.go:172] (0xc000104b00) Reply frame received for 3\nI0519 21:27:07.215253 1158 log.go:172] (0xc000104b00) (0xc00077f180) Create stream\nI0519 21:27:07.215266 1158 log.go:172] (0xc000104b00) (0xc00077f180) Stream added, broadcasting: 5\nI0519 21:27:07.216300 1158 log.go:172] (0xc000104b00) Reply frame received for 5\nI0519 21:27:07.309591 1158 log.go:172] (0xc000104b00) Data frame received for 5\nI0519 21:27:07.309619 1158 log.go:172] (0xc00077f180) (5) Data frame handling\nI0519 21:27:07.309637 1158 log.go:172] (0xc00077f180) (5) Data frame sent\n+ nslookup nodeport-service\nI0519 21:27:07.317101 1158 log.go:172] (0xc000104b00) Data frame received for 3\nI0519 21:27:07.317233 1158 log.go:172] (0xc00077f0e0) (3) Data frame handling\nI0519 21:27:07.317246 1158 log.go:172] 
(0xc00077f0e0) (3) Data frame sent\nI0519 21:27:07.318349 1158 log.go:172] (0xc000104b00) Data frame received for 3\nI0519 21:27:07.318361 1158 log.go:172] (0xc00077f0e0) (3) Data frame handling\nI0519 21:27:07.318367 1158 log.go:172] (0xc00077f0e0) (3) Data frame sent\nI0519 21:27:07.318949 1158 log.go:172] (0xc000104b00) Data frame received for 3\nI0519 21:27:07.318999 1158 log.go:172] (0xc00077f0e0) (3) Data frame handling\nI0519 21:27:07.319081 1158 log.go:172] (0xc000104b00) Data frame received for 5\nI0519 21:27:07.319098 1158 log.go:172] (0xc00077f180) (5) Data frame handling\nI0519 21:27:07.320605 1158 log.go:172] (0xc000104b00) Data frame received for 1\nI0519 21:27:07.320688 1158 log.go:172] (0xc00077f040) (1) Data frame handling\nI0519 21:27:07.320729 1158 log.go:172] (0xc00077f040) (1) Data frame sent\nI0519 21:27:07.320755 1158 log.go:172] (0xc000104b00) (0xc00077f040) Stream removed, broadcasting: 1\nI0519 21:27:07.320780 1158 log.go:172] (0xc000104b00) Go away received\nI0519 21:27:07.321382 1158 log.go:172] (0xc000104b00) (0xc00077f040) Stream removed, broadcasting: 1\nI0519 21:27:07.321407 1158 log.go:172] (0xc000104b00) (0xc00077f0e0) Stream removed, broadcasting: 3\nI0519 21:27:07.321420 1158 log.go:172] (0xc000104b00) (0xc00077f180) Stream removed, broadcasting: 5\n" May 19 21:27:07.326: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8495.svc.cluster.local\tcanonical name = externalsvc.services-8495.svc.cluster.local.\nName:\texternalsvc.services-8495.svc.cluster.local\nAddress: 10.106.225.85\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8495, will wait for the garbage collector to delete the pods May 19 21:27:07.387: INFO: Deleting ReplicationController externalsvc took: 6.582477ms May 19 21:27:07.687: INFO: Terminating ReplicationController externalsvc pods took: 300.250183ms May 19 21:27:19.327: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:27:19.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8495" for this suite. 
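The reverse service spec converts a NodePort Service into an ExternalName alias for another in-cluster service, then verifies the CNAME with nslookup, as the exec output above shows. Sketched by hand with the names from this run (the patch is illustrative; converting away from NodePort also means dropping the allocated ports and selector, and on some versions spec.clusterIP must be cleared as well):

    kubectl patch service nodeport-service -p \
      '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-8495.svc.cluster.local","selector":null,"ports":null}}'
    # from a pod: nslookup nodeport-service
    # expected: a CNAME pointing at externalsvc.services-8495.svc.cluster.local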
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.677 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":84,"skipped":1592,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:27:19.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:27:53.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4805" for this suite. 
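The container-runtime spec walks three containers through terminate/restart cycles and checks RestartCount, Phase, Ready, and State. Those observable fields live on the pod status; a minimal way to poke at them (names are placeholders):

    # a container that exits non-zero and is never restarted:
    apiVersion: v1
    kind: Pod
    metadata:
      name: exit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "exit 1"]

    kubectl get pod exit-demo \
      -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
    # expected once it terminates: Failed 0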
• [SLOW TEST:33.625 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1608,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:27:53.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:27:53.090: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:27:53.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7719" for this suite. 
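The CRD spec gets, updates, and patches the /status subresource of a definition. That subresource only exists when it is enabled on the CRD; a minimal definition with it switched on (group and names are placeholders):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
        subresources:
          status: {}     # exposes /status on each custom resource for GET, PUT, and PATCH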
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":86,"skipped":1613,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:27:53.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:28:09.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-399" for this suite. STEP: Destroying namespace "nsdeletetest-5609" for this suite. May 19 21:28:09.214: INFO: Namespace nsdeletetest-5609 was already deleted STEP: Destroying namespace "nsdeletetest-1123" for this suite. 
• [SLOW TEST:15.481 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":87,"skipped":1617,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:28:09.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 19 21:28:13.307: INFO: Pod pod-hostip-5ac8f4fe-ba0f-4afc-9a2d-42a9391dc5b1 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:28:13.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-968" for this suite. 
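The hostIP assertion in the Pods test above boils down to reading one status field once the pod is bound to a node; a minimal check (pod name hypothetical):

  kubectl run hostip-demo --image=busybox:1.29 --restart=Never -- sleep 60
  sleep 5   # .status.hostIP is only populated after scheduling
  kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'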
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1650,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:28:13.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:28:13.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3830" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":89,"skipped":1659,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:28:13.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 19 21:28:18.637: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:28:18.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-373" for this suite. • [SLOW TEST:5.636 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":90,"skipped":1664,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:28:19.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:28:19.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25c58c3c-716c-4456-a228-2a19fa5cfecb" in namespace "projected-9259" to be "success or failure" May 19 21:28:19.503: INFO: Pod "downwardapi-volume-25c58c3c-716c-4456-a228-2a19fa5cfecb": 
Phase="Pending", Reason="", readiness=false. Elapsed: 71.280358ms May 19 21:28:21.507: INFO: Pod "downwardapi-volume-25c58c3c-716c-4456-a228-2a19fa5cfecb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076084521s May 19 21:28:23.512: INFO: Pod "downwardapi-volume-25c58c3c-716c-4456-a228-2a19fa5cfecb": Phase="Running", Reason="", readiness=true. Elapsed: 4.080298531s May 19 21:28:25.516: INFO: Pod "downwardapi-volume-25c58c3c-716c-4456-a228-2a19fa5cfecb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0844532s STEP: Saw pod success May 19 21:28:25.516: INFO: Pod "downwardapi-volume-25c58c3c-716c-4456-a228-2a19fa5cfecb" satisfied condition "success or failure" May 19 21:28:25.519: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-25c58c3c-716c-4456-a228-2a19fa5cfecb container client-container: STEP: delete the pod May 19 21:28:25.582: INFO: Waiting for pod downwardapi-volume-25c58c3c-716c-4456-a228-2a19fa5cfecb to disappear May 19 21:28:25.599: INFO: Pod downwardapi-volume-25c58c3c-716c-4456-a228-2a19fa5cfecb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:28:25.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9259" for this suite. • [SLOW TEST:6.527 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1670,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:28:25.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 19 21:28:25.691: INFO: Waiting up to 5m0s for pod "pod-3436131d-ddb9-4e0d-90f6-4b47e035782b" in namespace "emptydir-3154" to be "success or failure" May 19 21:28:25.700: INFO: Pod "pod-3436131d-ddb9-4e0d-90f6-4b47e035782b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.360773ms May 19 21:28:27.737: INFO: Pod "pod-3436131d-ddb9-4e0d-90f6-4b47e035782b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046272813s May 19 21:28:29.743: INFO: Pod "pod-3436131d-ddb9-4e0d-90f6-4b47e035782b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052365905s STEP: Saw pod success May 19 21:28:29.743: INFO: Pod "pod-3436131d-ddb9-4e0d-90f6-4b47e035782b" satisfied condition "success or failure" May 19 21:28:29.745: INFO: Trying to get logs from node jerma-worker pod pod-3436131d-ddb9-4e0d-90f6-4b47e035782b container test-container: STEP: delete the pod May 19 21:28:29.811: INFO: Waiting for pod pod-3436131d-ddb9-4e0d-90f6-4b47e035782b to disappear May 19 21:28:29.844: INFO: Pod pod-3436131d-ddb9-4e0d-90f6-4b47e035782b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:28:29.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3154" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1686,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:28:29.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 19 21:28:30.086: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:28:30.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7630" for this suite. 
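In the proxy test above, -p 0 makes kubectl proxy bind an ephemeral port and print it on stdout, which is what the test then curls. A manual sketch (note that --disable-filter switches off the request filter and is only sensible on throwaway test clusters):

  kubectl proxy -p 0 --disable-filter > /tmp/proxy.out &
  sleep 1
  port=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.out)
  curl -s "http://127.0.0.1:${port}/api/"
  kill %1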
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":93,"skipped":1699,"failed":0} ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:28:30.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:28:30.260: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8540 I0519 21:28:30.309829 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8540, replica count: 1 I0519 21:28:31.360171 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 21:28:32.360380 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 21:28:33.360619 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 21:28:34.360852 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 21:28:34.521: INFO: Created: latency-svc-5mrnz May 19 21:28:34.541: INFO: Got endpoints: latency-svc-5mrnz [80.283972ms] May 19 21:28:34.670: INFO: Created: latency-svc-v2zxg May 19 21:28:34.714: INFO: Got endpoints: latency-svc-v2zxg [173.482724ms] May 19 21:28:34.715: INFO: Created: latency-svc-5gcw7 May 19 21:28:34.726: INFO: Got endpoints: latency-svc-5gcw7 [184.287145ms] May 19 21:28:34.832: INFO: Created: latency-svc-m4v27 May 19 21:28:34.835: INFO: Got endpoints: latency-svc-m4v27 [294.020015ms] May 19 21:28:34.864: INFO: Created: latency-svc-kp5j9 May 19 21:28:34.882: INFO: Got endpoints: latency-svc-kp5j9 [341.251151ms] May 19 21:28:34.905: INFO: Created: latency-svc-xnvsk May 19 21:28:34.926: INFO: Got endpoints: latency-svc-xnvsk [384.954978ms] May 19 21:28:34.984: INFO: Created: latency-svc-x5l6d May 19 21:28:34.998: INFO: Got endpoints: latency-svc-x5l6d [456.780163ms] May 19 21:28:35.044: INFO: Created: latency-svc-6fdvt May 19 21:28:35.057: INFO: Got endpoints: latency-svc-6fdvt [516.323577ms] May 19 21:28:35.141: INFO: Created: latency-svc-d66rf May 19 21:28:35.145: INFO: Got endpoints: latency-svc-d66rf [603.664345ms] May 19 21:28:35.194: INFO: Created: latency-svc-pxdh7 May 19 21:28:35.202: INFO: Got endpoints: latency-svc-pxdh7 [661.136163ms] May 19 21:28:35.232: INFO: Created: latency-svc-v6w9j May 19 21:28:35.275: INFO: Got endpoints: latency-svc-v6w9j [733.787711ms] May 19 21:28:35.302: INFO: Created: latency-svc-g457v May 19 21:28:35.332: INFO: Got endpoints: latency-svc-g457v [790.764203ms] May 19 21:28:35.362: INFO: Created: latency-svc-mfr4x May 19 21:28:35.400: INFO: Got endpoints: latency-svc-mfr4x [859.010415ms] May 19 
21:28:35.428: INFO: Created: latency-svc-gkc5z May 19 21:28:35.444: INFO: Got endpoints: latency-svc-gkc5z [902.311421ms] May 19 21:28:35.490: INFO: Created: latency-svc-txpwc May 19 21:28:35.532: INFO: Got endpoints: latency-svc-txpwc [990.937724ms] May 19 21:28:35.560: INFO: Created: latency-svc-mqlf8 May 19 21:28:35.583: INFO: Got endpoints: latency-svc-mqlf8 [1.042030728s] May 19 21:28:35.632: INFO: Created: latency-svc-6tbnf May 19 21:28:35.670: INFO: Got endpoints: latency-svc-6tbnf [955.807694ms] May 19 21:28:35.698: INFO: Created: latency-svc-m7msn May 19 21:28:35.715: INFO: Got endpoints: latency-svc-m7msn [989.684661ms] May 19 21:28:35.751: INFO: Created: latency-svc-bzqtk May 19 21:28:35.802: INFO: Got endpoints: latency-svc-bzqtk [966.473401ms] May 19 21:28:35.835: INFO: Created: latency-svc-nqwzb May 19 21:28:35.855: INFO: Got endpoints: latency-svc-nqwzb [972.065768ms] May 19 21:28:35.902: INFO: Created: latency-svc-rk4ng May 19 21:28:35.952: INFO: Got endpoints: latency-svc-rk4ng [1.02563783s] May 19 21:28:35.980: INFO: Created: latency-svc-c9wjt May 19 21:28:35.999: INFO: Got endpoints: latency-svc-c9wjt [1.001362337s] May 19 21:28:36.028: INFO: Created: latency-svc-twffl May 19 21:28:36.072: INFO: Got endpoints: latency-svc-twffl [1.014531173s] May 19 21:28:36.124: INFO: Created: latency-svc-fjwvb May 19 21:28:36.137: INFO: Got endpoints: latency-svc-fjwvb [992.432681ms] May 19 21:28:36.221: INFO: Created: latency-svc-r4tvr May 19 21:28:36.246: INFO: Got endpoints: latency-svc-r4tvr [1.044026178s] May 19 21:28:36.268: INFO: Created: latency-svc-rtnlw May 19 21:28:36.282: INFO: Got endpoints: latency-svc-rtnlw [1.007434945s] May 19 21:28:36.312: INFO: Created: latency-svc-mwjs9 May 19 21:28:36.360: INFO: Got endpoints: latency-svc-mwjs9 [1.027758123s] May 19 21:28:36.394: INFO: Created: latency-svc-mqrhm May 19 21:28:36.417: INFO: Got endpoints: latency-svc-mqrhm [1.016607767s] May 19 21:28:36.508: INFO: Created: latency-svc-vnhhc May 19 21:28:36.512: INFO: Got endpoints: latency-svc-vnhhc [1.068173662s] May 19 21:28:36.538: INFO: Created: latency-svc-csklb May 19 21:28:36.560: INFO: Got endpoints: latency-svc-csklb [1.027888969s] May 19 21:28:36.593: INFO: Created: latency-svc-kgdlx May 19 21:28:36.609: INFO: Got endpoints: latency-svc-kgdlx [1.025420964s] May 19 21:28:36.724: INFO: Created: latency-svc-srvwh May 19 21:28:36.740: INFO: Got endpoints: latency-svc-srvwh [1.069656258s] May 19 21:28:36.802: INFO: Created: latency-svc-vbb72 May 19 21:28:36.807: INFO: Got endpoints: latency-svc-vbb72 [1.091621358s] May 19 21:28:36.850: INFO: Created: latency-svc-wwb59 May 19 21:28:36.874: INFO: Got endpoints: latency-svc-wwb59 [1.071943745s] May 19 21:28:36.990: INFO: Created: latency-svc-dhlhw May 19 21:28:37.023: INFO: Got endpoints: latency-svc-dhlhw [1.168611689s] May 19 21:28:37.024: INFO: Created: latency-svc-hb6bm May 19 21:28:37.041: INFO: Got endpoints: latency-svc-hb6bm [1.089767616s] May 19 21:28:37.107: INFO: Created: latency-svc-6dlkk May 19 21:28:37.110: INFO: Got endpoints: latency-svc-6dlkk [1.111192661s] May 19 21:28:37.180: INFO: Created: latency-svc-xqqxw May 19 21:28:37.198: INFO: Got endpoints: latency-svc-xqqxw [1.126139365s] May 19 21:28:37.264: INFO: Created: latency-svc-xf5bg May 19 21:28:37.277: INFO: Got endpoints: latency-svc-xf5bg [1.139401582s] May 19 21:28:37.324: INFO: Created: latency-svc-zns7w May 19 21:28:37.344: INFO: Got endpoints: latency-svc-zns7w [1.097458902s] May 19 21:28:37.419: INFO: Created: latency-svc-g4v2w May 19 21:28:37.458: 
INFO: Got endpoints: latency-svc-g4v2w [1.175126264s] May 19 21:28:37.538: INFO: Created: latency-svc-zc5g2 May 19 21:28:37.542: INFO: Got endpoints: latency-svc-zc5g2 [1.182473108s] May 19 21:28:37.576: INFO: Created: latency-svc-mf9l4 May 19 21:28:37.596: INFO: Got endpoints: latency-svc-mf9l4 [1.178698514s] May 19 21:28:37.630: INFO: Created: latency-svc-m5znw May 19 21:28:37.688: INFO: Got endpoints: latency-svc-m5znw [1.176190926s] May 19 21:28:37.726: INFO: Created: latency-svc-tk7m2 May 19 21:28:37.735: INFO: Got endpoints: latency-svc-tk7m2 [1.174256012s] May 19 21:28:37.850: INFO: Created: latency-svc-82tmn May 19 21:28:37.852: INFO: Got endpoints: latency-svc-82tmn [1.243627413s] May 19 21:28:37.881: INFO: Created: latency-svc-4cc5c May 19 21:28:37.911: INFO: Got endpoints: latency-svc-4cc5c [1.170769421s] May 19 21:28:38.006: INFO: Created: latency-svc-spz5j May 19 21:28:38.043: INFO: Got endpoints: latency-svc-spz5j [1.236115032s] May 19 21:28:38.043: INFO: Created: latency-svc-2d944 May 19 21:28:38.060: INFO: Got endpoints: latency-svc-2d944 [1.186158494s] May 19 21:28:38.097: INFO: Created: latency-svc-z4qww May 19 21:28:38.174: INFO: Got endpoints: latency-svc-z4qww [1.150398542s] May 19 21:28:38.217: INFO: Created: latency-svc-8cx6q May 19 21:28:38.247: INFO: Got endpoints: latency-svc-8cx6q [1.205670047s] May 19 21:28:38.317: INFO: Created: latency-svc-k5sbc May 19 21:28:38.343: INFO: Created: latency-svc-h7c8v May 19 21:28:38.343: INFO: Got endpoints: latency-svc-k5sbc [1.232811298s] May 19 21:28:38.354: INFO: Got endpoints: latency-svc-h7c8v [1.15591508s] May 19 21:28:38.386: INFO: Created: latency-svc-5ff6x May 19 21:28:38.403: INFO: Got endpoints: latency-svc-5ff6x [1.126207042s] May 19 21:28:38.502: INFO: Created: latency-svc-jqz7s May 19 21:28:38.511: INFO: Got endpoints: latency-svc-jqz7s [1.167198053s] May 19 21:28:38.535: INFO: Created: latency-svc-rkmvf May 19 21:28:38.566: INFO: Got endpoints: latency-svc-rkmvf [1.108325134s] May 19 21:28:38.595: INFO: Created: latency-svc-t48v8 May 19 21:28:38.682: INFO: Got endpoints: latency-svc-t48v8 [1.13991294s] May 19 21:28:38.684: INFO: Created: latency-svc-99kn4 May 19 21:28:38.723: INFO: Got endpoints: latency-svc-99kn4 [1.126917736s] May 19 21:28:38.757: INFO: Created: latency-svc-6dm8z May 19 21:28:38.771: INFO: Got endpoints: latency-svc-6dm8z [1.082895134s] May 19 21:28:38.839: INFO: Created: latency-svc-s5vkj May 19 21:28:38.843: INFO: Got endpoints: latency-svc-s5vkj [1.107915202s] May 19 21:28:38.871: INFO: Created: latency-svc-g4pgh May 19 21:28:38.888: INFO: Got endpoints: latency-svc-g4pgh [1.035046376s] May 19 21:28:38.913: INFO: Created: latency-svc-sfznj May 19 21:28:39.011: INFO: Got endpoints: latency-svc-sfznj [1.100255507s] May 19 21:28:39.039: INFO: Created: latency-svc-2hxgv May 19 21:28:39.078: INFO: Got endpoints: latency-svc-2hxgv [1.034831996s] May 19 21:28:39.111: INFO: Created: latency-svc-dkf6z May 19 21:28:39.162: INFO: Got endpoints: latency-svc-dkf6z [1.101442882s] May 19 21:28:39.189: INFO: Created: latency-svc-g6m42 May 19 21:28:39.213: INFO: Got endpoints: latency-svc-g6m42 [1.039566707s] May 19 21:28:39.243: INFO: Created: latency-svc-pn85f May 19 21:28:39.311: INFO: Got endpoints: latency-svc-pn85f [1.06368486s] May 19 21:28:39.357: INFO: Created: latency-svc-9zn85 May 19 21:28:39.373: INFO: Got endpoints: latency-svc-9zn85 [1.029918247s] May 19 21:28:39.405: INFO: Created: latency-svc-b6vtt May 19 21:28:39.449: INFO: Got endpoints: latency-svc-b6vtt [1.094556787s] May 19 21:28:39.489: 
INFO: Created: latency-svc-sdxzs May 19 21:28:39.506: INFO: Got endpoints: latency-svc-sdxzs [1.102844797s] May 19 21:28:39.526: INFO: Created: latency-svc-gp2x6 May 19 21:28:39.548: INFO: Got endpoints: latency-svc-gp2x6 [1.036965199s] May 19 21:28:39.633: INFO: Created: latency-svc-v5ntt May 19 21:28:39.651: INFO: Got endpoints: latency-svc-v5ntt [1.084641061s] May 19 21:28:39.675: INFO: Created: latency-svc-fhwqc May 19 21:28:39.693: INFO: Got endpoints: latency-svc-fhwqc [1.011300172s] May 19 21:28:39.796: INFO: Created: latency-svc-ltqr5 May 19 21:28:39.800: INFO: Got endpoints: latency-svc-ltqr5 [1.077666721s] May 19 21:28:39.861: INFO: Created: latency-svc-vm9xq May 19 21:28:39.880: INFO: Got endpoints: latency-svc-vm9xq [1.109125161s] May 19 21:28:39.955: INFO: Created: latency-svc-rwd8l May 19 21:28:39.959: INFO: Got endpoints: latency-svc-rwd8l [1.116503581s] May 19 21:28:39.999: INFO: Created: latency-svc-snqfz May 19 21:28:40.018: INFO: Got endpoints: latency-svc-snqfz [1.1303987s] May 19 21:28:40.041: INFO: Created: latency-svc-22fz2 May 19 21:28:40.095: INFO: Got endpoints: latency-svc-22fz2 [1.083858704s] May 19 21:28:40.100: INFO: Created: latency-svc-67wzs May 19 21:28:40.115: INFO: Got endpoints: latency-svc-67wzs [1.036488974s] May 19 21:28:40.143: INFO: Created: latency-svc-hxx42 May 19 21:28:40.163: INFO: Got endpoints: latency-svc-hxx42 [1.001638354s] May 19 21:28:40.192: INFO: Created: latency-svc-88gsb May 19 21:28:40.245: INFO: Got endpoints: latency-svc-88gsb [1.031830049s] May 19 21:28:40.268: INFO: Created: latency-svc-jgx9n May 19 21:28:40.278: INFO: Got endpoints: latency-svc-jgx9n [966.636931ms] May 19 21:28:40.323: INFO: Created: latency-svc-krgnz May 19 21:28:40.338: INFO: Got endpoints: latency-svc-krgnz [964.811221ms] May 19 21:28:40.389: INFO: Created: latency-svc-cnhtk May 19 21:28:40.399: INFO: Got endpoints: latency-svc-cnhtk [950.375704ms] May 19 21:28:40.430: INFO: Created: latency-svc-qzkcc May 19 21:28:40.485: INFO: Got endpoints: latency-svc-qzkcc [979.004298ms] May 19 21:28:40.544: INFO: Created: latency-svc-4q4g5 May 19 21:28:40.555: INFO: Got endpoints: latency-svc-4q4g5 [1.006868497s] May 19 21:28:40.587: INFO: Created: latency-svc-z9l87 May 19 21:28:40.604: INFO: Got endpoints: latency-svc-z9l87 [953.419442ms] May 19 21:28:40.641: INFO: Created: latency-svc-stxxq May 19 21:28:40.706: INFO: Got endpoints: latency-svc-stxxq [1.012619027s] May 19 21:28:40.767: INFO: Created: latency-svc-7kbp9 May 19 21:28:40.784: INFO: Got endpoints: latency-svc-7kbp9 [983.873784ms] May 19 21:28:40.922: INFO: Created: latency-svc-xfdmn May 19 21:28:40.925: INFO: Got endpoints: latency-svc-xfdmn [1.044733543s] May 19 21:28:40.959: INFO: Created: latency-svc-9cdzd May 19 21:28:40.977: INFO: Got endpoints: latency-svc-9cdzd [1.017522672s] May 19 21:28:41.096: INFO: Created: latency-svc-zh82z May 19 21:28:41.099: INFO: Got endpoints: latency-svc-zh82z [1.080558729s] May 19 21:28:41.233: INFO: Created: latency-svc-fnmhn May 19 21:28:41.241: INFO: Got endpoints: latency-svc-fnmhn [1.14610246s] May 19 21:28:41.277: INFO: Created: latency-svc-h4vxz May 19 21:28:41.320: INFO: Got endpoints: latency-svc-h4vxz [1.204864945s] May 19 21:28:41.407: INFO: Created: latency-svc-ssqsq May 19 21:28:41.417: INFO: Got endpoints: latency-svc-ssqsq [1.253873309s] May 19 21:28:41.493: INFO: Created: latency-svc-bxpqg May 19 21:28:41.538: INFO: Got endpoints: latency-svc-bxpqg [1.293209157s] May 19 21:28:41.552: INFO: Created: latency-svc-pqzk5 May 19 21:28:41.567: INFO: Got endpoints: 
latency-svc-pqzk5 [1.289272867s] May 19 21:28:41.588: INFO: Created: latency-svc-7z8tt May 19 21:28:41.609: INFO: Got endpoints: latency-svc-7z8tt [1.270741788s] May 19 21:28:41.637: INFO: Created: latency-svc-j9czh May 19 21:28:41.688: INFO: Got endpoints: latency-svc-j9czh [1.288503018s] May 19 21:28:41.733: INFO: Created: latency-svc-xzh25 May 19 21:28:41.747: INFO: Got endpoints: latency-svc-xzh25 [1.262206437s] May 19 21:28:41.775: INFO: Created: latency-svc-9clnh May 19 21:28:41.820: INFO: Got endpoints: latency-svc-9clnh [1.264558733s] May 19 21:28:41.835: INFO: Created: latency-svc-d65x7 May 19 21:28:41.851: INFO: Got endpoints: latency-svc-d65x7 [1.24637462s] May 19 21:28:41.884: INFO: Created: latency-svc-4nvmj May 19 21:28:41.898: INFO: Got endpoints: latency-svc-4nvmj [1.192148389s] May 19 21:28:41.959: INFO: Created: latency-svc-626hj May 19 21:28:41.960: INFO: Got endpoints: latency-svc-626hj [1.176030976s] May 19 21:28:42.034: INFO: Created: latency-svc-b6b2m May 19 21:28:42.046: INFO: Got endpoints: latency-svc-b6b2m [1.120967044s] May 19 21:28:42.114: INFO: Created: latency-svc-ssdkt May 19 21:28:42.117: INFO: Got endpoints: latency-svc-ssdkt [1.140291227s] May 19 21:28:42.140: INFO: Created: latency-svc-wl278 May 19 21:28:42.155: INFO: Got endpoints: latency-svc-wl278 [1.056088487s] May 19 21:28:42.182: INFO: Created: latency-svc-97mx7 May 19 21:28:42.192: INFO: Got endpoints: latency-svc-97mx7 [950.433095ms] May 19 21:28:42.213: INFO: Created: latency-svc-4ks6v May 19 21:28:42.251: INFO: Got endpoints: latency-svc-4ks6v [931.512218ms] May 19 21:28:42.266: INFO: Created: latency-svc-m5zgc May 19 21:28:42.283: INFO: Got endpoints: latency-svc-m5zgc [865.373095ms] May 19 21:28:42.322: INFO: Created: latency-svc-vzknk May 19 21:28:42.338: INFO: Got endpoints: latency-svc-vzknk [799.770204ms] May 19 21:28:42.416: INFO: Created: latency-svc-5kj75 May 19 21:28:42.433: INFO: Got endpoints: latency-svc-5kj75 [866.527039ms] May 19 21:28:42.464: INFO: Created: latency-svc-l6f6l May 19 21:28:42.481: INFO: Got endpoints: latency-svc-l6f6l [872.501981ms] May 19 21:28:42.568: INFO: Created: latency-svc-gt6sc May 19 21:28:42.572: INFO: Got endpoints: latency-svc-gt6sc [883.875874ms] May 19 21:28:42.638: INFO: Created: latency-svc-jldjb May 19 21:28:42.650: INFO: Got endpoints: latency-svc-jldjb [902.556855ms] May 19 21:28:42.715: INFO: Created: latency-svc-2gjq5 May 19 21:28:42.715: INFO: Got endpoints: latency-svc-2gjq5 [895.118475ms] May 19 21:28:42.868: INFO: Created: latency-svc-cp88k May 19 21:28:42.874: INFO: Got endpoints: latency-svc-cp88k [1.022917682s] May 19 21:28:42.932: INFO: Created: latency-svc-m8z92 May 19 21:28:42.952: INFO: Got endpoints: latency-svc-m8z92 [1.053190741s] May 19 21:28:43.006: INFO: Created: latency-svc-sw6nw May 19 21:28:43.011: INFO: Got endpoints: latency-svc-sw6nw [1.050269684s] May 19 21:28:43.064: INFO: Created: latency-svc-lpgpp May 19 21:28:43.084: INFO: Got endpoints: latency-svc-lpgpp [1.037827898s] May 19 21:28:43.161: INFO: Created: latency-svc-fkdg8 May 19 21:28:43.164: INFO: Got endpoints: latency-svc-fkdg8 [1.047014558s] May 19 21:28:43.220: INFO: Created: latency-svc-vrpc2 May 19 21:28:43.234: INFO: Got endpoints: latency-svc-vrpc2 [1.079500018s] May 19 21:28:43.258: INFO: Created: latency-svc-7vfvx May 19 21:28:43.311: INFO: Got endpoints: latency-svc-7vfvx [1.118779127s] May 19 21:28:43.316: INFO: Created: latency-svc-wknxf May 19 21:28:43.331: INFO: Got endpoints: latency-svc-wknxf [1.079914196s] May 19 21:28:43.361: INFO: Created: 
latency-svc-jk9s4 May 19 21:28:43.374: INFO: Got endpoints: latency-svc-jk9s4 [1.090947848s] May 19 21:28:43.400: INFO: Created: latency-svc-msfhl May 19 21:28:43.436: INFO: Got endpoints: latency-svc-msfhl [1.097991181s] May 19 21:28:43.480: INFO: Created: latency-svc-bsgm2 May 19 21:28:43.494: INFO: Got endpoints: latency-svc-bsgm2 [1.060624543s] May 19 21:28:43.520: INFO: Created: latency-svc-q5hd2 May 19 21:28:43.531: INFO: Got endpoints: latency-svc-q5hd2 [1.049143501s] May 19 21:28:43.569: INFO: Created: latency-svc-wv4pz May 19 21:28:43.591: INFO: Got endpoints: latency-svc-wv4pz [1.019013165s] May 19 21:28:43.617: INFO: Created: latency-svc-9fdnh May 19 21:28:43.634: INFO: Got endpoints: latency-svc-9fdnh [983.627614ms] May 19 21:28:43.719: INFO: Created: latency-svc-lwdt4 May 19 21:28:43.731: INFO: Got endpoints: latency-svc-lwdt4 [1.015734581s] May 19 21:28:43.755: INFO: Created: latency-svc-82qh2 May 19 21:28:43.766: INFO: Got endpoints: latency-svc-82qh2 [892.461599ms] May 19 21:28:43.790: INFO: Created: latency-svc-v56rm May 19 21:28:43.809: INFO: Got endpoints: latency-svc-v56rm [857.7906ms] May 19 21:28:43.851: INFO: Created: latency-svc-s5l28 May 19 21:28:43.852: INFO: Got endpoints: latency-svc-s5l28 [841.471042ms] May 19 21:28:43.904: INFO: Created: latency-svc-p82w9 May 19 21:28:43.923: INFO: Got endpoints: latency-svc-p82w9 [839.398631ms] May 19 21:28:43.947: INFO: Created: latency-svc-xwnpr May 19 21:28:43.981: INFO: Got endpoints: latency-svc-xwnpr [817.244646ms] May 19 21:28:44.006: INFO: Created: latency-svc-8qnsx May 19 21:28:44.020: INFO: Got endpoints: latency-svc-8qnsx [785.939009ms] May 19 21:28:44.042: INFO: Created: latency-svc-s2bb5 May 19 21:28:44.057: INFO: Got endpoints: latency-svc-s2bb5 [746.274834ms] May 19 21:28:44.078: INFO: Created: latency-svc-g276f May 19 21:28:44.131: INFO: Got endpoints: latency-svc-g276f [800.430196ms] May 19 21:28:44.145: INFO: Created: latency-svc-gftgt May 19 21:28:44.154: INFO: Got endpoints: latency-svc-gftgt [779.9703ms] May 19 21:28:44.175: INFO: Created: latency-svc-z2ttv May 19 21:28:44.184: INFO: Got endpoints: latency-svc-z2ttv [747.506609ms] May 19 21:28:44.210: INFO: Created: latency-svc-nkmxq May 19 21:28:44.270: INFO: Got endpoints: latency-svc-nkmxq [775.673427ms] May 19 21:28:44.294: INFO: Created: latency-svc-dd6j6 May 19 21:28:44.311: INFO: Got endpoints: latency-svc-dd6j6 [780.211577ms] May 19 21:28:44.342: INFO: Created: latency-svc-mvwgv May 19 21:28:44.359: INFO: Got endpoints: latency-svc-mvwgv [768.053872ms] May 19 21:28:44.413: INFO: Created: latency-svc-pfsg2 May 19 21:28:44.426: INFO: Got endpoints: latency-svc-pfsg2 [792.78147ms] May 19 21:28:44.463: INFO: Created: latency-svc-88vz2 May 19 21:28:44.480: INFO: Got endpoints: latency-svc-88vz2 [749.191365ms] May 19 21:28:44.505: INFO: Created: latency-svc-gbh7b May 19 21:28:44.575: INFO: Got endpoints: latency-svc-gbh7b [808.526923ms] May 19 21:28:44.601: INFO: Created: latency-svc-f9qhf May 19 21:28:44.612: INFO: Got endpoints: latency-svc-f9qhf [803.044145ms] May 19 21:28:44.648: INFO: Created: latency-svc-8rw78 May 19 21:28:44.667: INFO: Got endpoints: latency-svc-8rw78 [814.715361ms] May 19 21:28:44.724: INFO: Created: latency-svc-xg5hw May 19 21:28:44.734: INFO: Got endpoints: latency-svc-xg5hw [810.318918ms] May 19 21:28:44.762: INFO: Created: latency-svc-bf498 May 19 21:28:44.788: INFO: Got endpoints: latency-svc-bf498 [806.20961ms] May 19 21:28:44.886: INFO: Created: latency-svc-928kk May 19 21:28:44.896: INFO: Got endpoints: latency-svc-928kk 
[875.296781ms] May 19 21:28:44.920: INFO: Created: latency-svc-9qjkn May 19 21:28:44.932: INFO: Got endpoints: latency-svc-9qjkn [875.250079ms] May 19 21:28:44.954: INFO: Created: latency-svc-zgq7l May 19 21:28:45.059: INFO: Created: latency-svc-ws5wm May 19 21:28:45.059: INFO: Got endpoints: latency-svc-zgq7l [927.857964ms] May 19 21:28:45.104: INFO: Got endpoints: latency-svc-ws5wm [950.596363ms] May 19 21:28:45.146: INFO: Created: latency-svc-fxjfw May 19 21:28:45.197: INFO: Got endpoints: latency-svc-fxjfw [1.013063335s] May 19 21:28:45.206: INFO: Created: latency-svc-kmkcq May 19 21:28:45.224: INFO: Got endpoints: latency-svc-kmkcq [953.732277ms] May 19 21:28:45.249: INFO: Created: latency-svc-r4rxc May 19 21:28:45.260: INFO: Got endpoints: latency-svc-r4rxc [948.706322ms] May 19 21:28:45.290: INFO: Created: latency-svc-6gzp7 May 19 21:28:45.341: INFO: Got endpoints: latency-svc-6gzp7 [982.197802ms] May 19 21:28:45.404: INFO: Created: latency-svc-v9vnh May 19 21:28:45.423: INFO: Got endpoints: latency-svc-v9vnh [996.274648ms] May 19 21:28:45.491: INFO: Created: latency-svc-6444j May 19 21:28:45.501: INFO: Got endpoints: latency-svc-6444j [1.02096091s] May 19 21:28:45.525: INFO: Created: latency-svc-vhv48 May 19 21:28:45.543: INFO: Got endpoints: latency-svc-vhv48 [968.785906ms] May 19 21:28:45.573: INFO: Created: latency-svc-tdzfq May 19 21:28:45.623: INFO: Got endpoints: latency-svc-tdzfq [1.010568743s] May 19 21:28:45.632: INFO: Created: latency-svc-j4w2v May 19 21:28:45.646: INFO: Got endpoints: latency-svc-j4w2v [978.778764ms] May 19 21:28:45.686: INFO: Created: latency-svc-777vr May 19 21:28:45.766: INFO: Got endpoints: latency-svc-777vr [1.032599346s] May 19 21:28:45.776: INFO: Created: latency-svc-gxb2w May 19 21:28:45.791: INFO: Got endpoints: latency-svc-gxb2w [1.003380911s] May 19 21:28:45.812: INFO: Created: latency-svc-shr5f May 19 21:28:45.827: INFO: Got endpoints: latency-svc-shr5f [931.496817ms] May 19 21:28:45.848: INFO: Created: latency-svc-hwj4v May 19 21:28:45.934: INFO: Got endpoints: latency-svc-hwj4v [1.001527263s] May 19 21:28:45.944: INFO: Created: latency-svc-k6xvw May 19 21:28:45.964: INFO: Got endpoints: latency-svc-k6xvw [904.185821ms] May 19 21:28:45.986: INFO: Created: latency-svc-p5wm4 May 19 21:28:46.003: INFO: Got endpoints: latency-svc-p5wm4 [898.515429ms] May 19 21:28:46.022: INFO: Created: latency-svc-tb7cp May 19 21:28:46.078: INFO: Got endpoints: latency-svc-tb7cp [880.675653ms] May 19 21:28:46.100: INFO: Created: latency-svc-n726s May 19 21:28:46.117: INFO: Got endpoints: latency-svc-n726s [893.02472ms] May 19 21:28:46.142: INFO: Created: latency-svc-t75s7 May 19 21:28:46.159: INFO: Got endpoints: latency-svc-t75s7 [899.372659ms] May 19 21:28:46.221: INFO: Created: latency-svc-pssrd May 19 21:28:46.263: INFO: Got endpoints: latency-svc-pssrd [921.481918ms] May 19 21:28:46.306: INFO: Created: latency-svc-fjcld May 19 21:28:46.316: INFO: Got endpoints: latency-svc-fjcld [893.29666ms] May 19 21:28:46.371: INFO: Created: latency-svc-ff98d May 19 21:28:46.376: INFO: Got endpoints: latency-svc-ff98d [874.683447ms] May 19 21:28:46.400: INFO: Created: latency-svc-gx79c May 19 21:28:46.419: INFO: Got endpoints: latency-svc-gx79c [875.269299ms] May 19 21:28:46.448: INFO: Created: latency-svc-mwgqj May 19 21:28:46.509: INFO: Got endpoints: latency-svc-mwgqj [885.95615ms] May 19 21:28:46.532: INFO: Created: latency-svc-d9f7z May 19 21:28:46.558: INFO: Got endpoints: latency-svc-d9f7z [912.100287ms] May 19 21:28:46.594: INFO: Created: latency-svc-zz6rn May 19 
21:28:46.654: INFO: Got endpoints: latency-svc-zz6rn [887.08014ms] May 19 21:28:46.665: INFO: Created: latency-svc-9t8jm May 19 21:28:46.684: INFO: Got endpoints: latency-svc-9t8jm [892.942768ms] May 19 21:28:46.718: INFO: Created: latency-svc-k69s5 May 19 21:28:46.743: INFO: Got endpoints: latency-svc-k69s5 [915.324437ms] May 19 21:28:46.809: INFO: Created: latency-svc-xv9s7 May 19 21:28:46.844: INFO: Got endpoints: latency-svc-xv9s7 [910.066354ms] May 19 21:28:46.893: INFO: Created: latency-svc-w8x8z May 19 21:28:46.901: INFO: Got endpoints: latency-svc-w8x8z [937.602242ms] May 19 21:28:46.958: INFO: Created: latency-svc-t5zvt May 19 21:28:46.967: INFO: Got endpoints: latency-svc-t5zvt [964.612126ms] May 19 21:28:47.001: INFO: Created: latency-svc-w4jmv May 19 21:28:47.016: INFO: Got endpoints: latency-svc-w4jmv [937.811711ms] May 19 21:28:47.124: INFO: Created: latency-svc-7xkjr May 19 21:28:47.127: INFO: Got endpoints: latency-svc-7xkjr [1.009897169s] May 19 21:28:47.193: INFO: Created: latency-svc-vwhlm May 19 21:28:47.208: INFO: Got endpoints: latency-svc-vwhlm [1.04907198s] May 19 21:28:47.257: INFO: Created: latency-svc-zsmks May 19 21:28:47.259: INFO: Got endpoints: latency-svc-zsmks [996.581475ms] May 19 21:28:47.294: INFO: Created: latency-svc-rx82x May 19 21:28:47.311: INFO: Got endpoints: latency-svc-rx82x [994.66493ms] May 19 21:28:47.336: INFO: Created: latency-svc-q9pwd May 19 21:28:47.348: INFO: Got endpoints: latency-svc-q9pwd [972.089287ms] May 19 21:28:47.407: INFO: Created: latency-svc-gw68f May 19 21:28:47.414: INFO: Got endpoints: latency-svc-gw68f [995.250392ms] May 19 21:28:47.452: INFO: Created: latency-svc-58thw May 19 21:28:47.468: INFO: Got endpoints: latency-svc-58thw [959.029078ms] May 19 21:28:47.492: INFO: Created: latency-svc-rl5zn May 19 21:28:47.568: INFO: Got endpoints: latency-svc-rl5zn [1.010152444s] May 19 21:28:47.572: INFO: Created: latency-svc-m6n8s May 19 21:28:47.583: INFO: Got endpoints: latency-svc-m6n8s [929.227548ms] May 19 21:28:47.607: INFO: Created: latency-svc-k2ghc May 19 21:28:47.625: INFO: Got endpoints: latency-svc-k2ghc [940.892491ms] May 19 21:28:47.648: INFO: Created: latency-svc-vrv5v May 19 21:28:47.667: INFO: Got endpoints: latency-svc-vrv5v [924.845387ms] May 19 21:28:47.718: INFO: Created: latency-svc-6jq4f May 19 21:28:47.728: INFO: Got endpoints: latency-svc-6jq4f [884.176234ms] May 19 21:28:47.769: INFO: Created: latency-svc-tp754 May 19 21:28:47.788: INFO: Got endpoints: latency-svc-tp754 [886.974269ms] May 19 21:28:47.811: INFO: Created: latency-svc-9bhh9 May 19 21:28:47.856: INFO: Got endpoints: latency-svc-9bhh9 [888.254026ms] May 19 21:28:47.872: INFO: Created: latency-svc-wgtsq May 19 21:28:47.897: INFO: Got endpoints: latency-svc-wgtsq [881.57283ms] May 19 21:28:47.924: INFO: Created: latency-svc-rhdkv May 19 21:28:47.939: INFO: Got endpoints: latency-svc-rhdkv [812.434785ms] May 19 21:28:47.939: INFO: Latencies: [173.482724ms 184.287145ms 294.020015ms 341.251151ms 384.954978ms 456.780163ms 516.323577ms 603.664345ms 661.136163ms 733.787711ms 746.274834ms 747.506609ms 749.191365ms 768.053872ms 775.673427ms 779.9703ms 780.211577ms 785.939009ms 790.764203ms 792.78147ms 799.770204ms 800.430196ms 803.044145ms 806.20961ms 808.526923ms 810.318918ms 812.434785ms 814.715361ms 817.244646ms 839.398631ms 841.471042ms 857.7906ms 859.010415ms 865.373095ms 866.527039ms 872.501981ms 874.683447ms 875.250079ms 875.269299ms 875.296781ms 880.675653ms 881.57283ms 883.875874ms 884.176234ms 885.95615ms 886.974269ms 887.08014ms 
888.254026ms 892.461599ms 892.942768ms 893.02472ms 893.29666ms 895.118475ms 898.515429ms 899.372659ms 902.311421ms 902.556855ms 904.185821ms 910.066354ms 912.100287ms 915.324437ms 921.481918ms 924.845387ms 927.857964ms 929.227548ms 931.496817ms 931.512218ms 937.602242ms 937.811711ms 940.892491ms 948.706322ms 950.375704ms 950.433095ms 950.596363ms 953.419442ms 953.732277ms 955.807694ms 959.029078ms 964.612126ms 964.811221ms 966.473401ms 966.636931ms 968.785906ms 972.065768ms 972.089287ms 978.778764ms 979.004298ms 982.197802ms 983.627614ms 983.873784ms 989.684661ms 990.937724ms 992.432681ms 994.66493ms 995.250392ms 996.274648ms 996.581475ms 1.001362337s 1.001527263s 1.001638354s 1.003380911s 1.006868497s 1.007434945s 1.009897169s 1.010152444s 1.010568743s 1.011300172s 1.012619027s 1.013063335s 1.014531173s 1.015734581s 1.016607767s 1.017522672s 1.019013165s 1.02096091s 1.022917682s 1.025420964s 1.02563783s 1.027758123s 1.027888969s 1.029918247s 1.031830049s 1.032599346s 1.034831996s 1.035046376s 1.036488974s 1.036965199s 1.037827898s 1.039566707s 1.042030728s 1.044026178s 1.044733543s 1.047014558s 1.04907198s 1.049143501s 1.050269684s 1.053190741s 1.056088487s 1.060624543s 1.06368486s 1.068173662s 1.069656258s 1.071943745s 1.077666721s 1.079500018s 1.079914196s 1.080558729s 1.082895134s 1.083858704s 1.084641061s 1.089767616s 1.090947848s 1.091621358s 1.094556787s 1.097458902s 1.097991181s 1.100255507s 1.101442882s 1.102844797s 1.107915202s 1.108325134s 1.109125161s 1.111192661s 1.116503581s 1.118779127s 1.120967044s 1.126139365s 1.126207042s 1.126917736s 1.1303987s 1.139401582s 1.13991294s 1.140291227s 1.14610246s 1.150398542s 1.15591508s 1.167198053s 1.168611689s 1.170769421s 1.174256012s 1.175126264s 1.176030976s 1.176190926s 1.178698514s 1.182473108s 1.186158494s 1.192148389s 1.204864945s 1.205670047s 1.232811298s 1.236115032s 1.243627413s 1.24637462s 1.253873309s 1.262206437s 1.264558733s 1.270741788s 1.288503018s 1.289272867s 1.293209157s] May 19 21:28:47.941: INFO: 50 %ile: 1.003380911s May 19 21:28:47.941: INFO: 90 %ile: 1.175126264s May 19 21:28:47.941: INFO: 99 %ile: 1.289272867s May 19 21:28:47.941: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:28:47.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8540" for this suite. 
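Each "Created:"/"Got endpoints:" pair above is one sample: the test creates a Service selecting the svc-latency-rc pod and times how long until the Service's Endpoints object is populated, then reports percentiles over 200 samples. A crude single-sample version (names hypothetical; assumes an rc named svc-latency-rc exists in the current namespace and GNU date for millisecond timestamps):

  start=$(date +%s%3N)
  kubectl expose rc svc-latency-rc --name=latency-probe --port=80
  until [ -n "$(kubectl get endpoints latency-probe -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
    sleep 0.05
  done
  echo "endpoints ready after $(( $(date +%s%3N) - start )) ms"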
• [SLOW TEST:17.769 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":94,"skipped":1699,"failed":0} [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:28:47.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-efc2f055-86b5-4558-8b6e-5e8a2a670f44 in namespace container-probe-965 May 19 21:28:52.050: INFO: Started pod busybox-efc2f055-86b5-4558-8b6e-5e8a2a670f44 in namespace container-probe-965 STEP: checking the pod's current state and verifying that restartCount is present May 19 21:28:52.054: INFO: Initial restart count of pod busybox-efc2f055-86b5-4558-8b6e-5e8a2a670f44 is 0 May 19 21:29:46.516: INFO: Restart count of pod container-probe-965/busybox-efc2f055-86b5-4558-8b6e-5e8a2a670f44 is now 1 (54.462197657s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:29:46.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-965" for this suite. 
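The restart observed above is forced by a container that deletes its own health file partway through its run, so the exec probe starts failing. A sketch of such a pod, mirroring the pattern the test exercises (names and timings hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-exec-demo
  spec:
    containers:
    - name: busybox
      image: busybox:1.29
      command: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
        failureThreshold: 1
  EOF
  # after roughly 30-60s the kubelet kills and restarts the container:
  kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'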
• [SLOW TEST:58.599 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1699,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:29:46.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 19 21:29:53.517: INFO: 8 pods remaining May 19 21:29:53.517: INFO: 0 pods has nil DeletionTimestamp May 19 21:29:53.517: INFO: May 19 21:29:54.572: INFO: 0 pods remaining May 19 21:29:54.572: INFO: 0 pods has nil DeletionTimestamp May 19 21:29:54.572: INFO: May 19 21:29:54.995: INFO: 0 pods remaining May 19 21:29:54.995: INFO: 0 pods has nil DeletionTimestamp May 19 21:29:54.995: INFO: STEP: Gathering metrics W0519 21:29:56.640633 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 21:29:56.640: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:29:56.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5310" for this suite. 
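"Keep the rc around until all its pods are deleted" is foreground cascading deletion: the owner gets a deletionTimestamp plus the foregroundDeletion finalizer and only disappears once its dependents are gone, which is why the log shows pods draining before the rc vanishes. With a recent kubectl (>= 1.20) this is a single flag; older clients had to send DeleteOptions with PropagationPolicy=Foreground through the API directly. Sketch with a hypothetical rc name:

  kubectl delete rc test-rc --cascade=foreground &
  # while deletion is in flight, the rc is still visible with a deletionTimestamp set:
  kubectl get rc test-rc -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'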
• [SLOW TEST:10.312 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":96,"skipped":1717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:29:56.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 19 21:29:57.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7036' May 19 21:29:58.016: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 21:29:58.016: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 19 21:29:58.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7036' May 19 21:29:59.401: INFO: stderr: "" May 19 21:29:59.401: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:29:59.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7036" for this suite. 
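As the stderr captured above notes, generator-based kubectl run was deprecated (and later removed entirely); the closest modern equivalent of the logged command is kubectl create job, which defaults the pod template to restartPolicy Never rather than OnFailure. Names below are hypothetical:

  kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
  kubectl get jobs
  kubectl delete job e2e-test-httpd-job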
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":97,"skipped":1778,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:29:59.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:29:59.585: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60a5f42f-4c19-4dc9-ac9c-d7fdf0204f64" in namespace "downward-api-6922" to be "success or failure" May 19 21:29:59.610: INFO: Pod "downwardapi-volume-60a5f42f-4c19-4dc9-ac9c-d7fdf0204f64": Phase="Pending", Reason="", readiness=false. Elapsed: 24.704137ms May 19 21:30:01.713: INFO: Pod "downwardapi-volume-60a5f42f-4c19-4dc9-ac9c-d7fdf0204f64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12798347s May 19 21:30:03.717: INFO: Pod "downwardapi-volume-60a5f42f-4c19-4dc9-ac9c-d7fdf0204f64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131318231s STEP: Saw pod success May 19 21:30:03.717: INFO: Pod "downwardapi-volume-60a5f42f-4c19-4dc9-ac9c-d7fdf0204f64" satisfied condition "success or failure" May 19 21:30:03.719: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-60a5f42f-4c19-4dc9-ac9c-d7fdf0204f64 container client-container: STEP: delete the pod May 19 21:30:03.755: INFO: Waiting for pod downwardapi-volume-60a5f42f-4c19-4dc9-ac9c-d7fdf0204f64 to disappear May 19 21:30:03.781: INFO: Pod downwardapi-volume-60a5f42f-4c19-4dc9-ac9c-d7fdf0204f64 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:30:03.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6922" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1780,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:30:03.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 21:30:07.954: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:30:07.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1134" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1795,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:30:07.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:30:08.099: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 19 21:30:13.136: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 21:30:13.136: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 19 21:30:13.172: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1777 /apis/apps/v1/namespaces/deployment-1777/deployments/test-cleanup-deployment 986def4d-8714-4f07-9a39-d16778e28e3e 17531573 1 2020-05-19 21:30:13 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00383b168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 19 21:30:13.251: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-1777 /apis/apps/v1/namespaces/deployment-1777/replicasets/test-cleanup-deployment-55ffc6b7b6 9497a518-f489-4aab-bebf-52fb698d0b96 
17531579 1 2020-05-19 21:30:13 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 986def4d-8714-4f07-9a39-d16778e28e3e 0xc00383b577 0xc00383b578}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00383b5e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 21:30:13.251: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 19 21:30:13.251: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-1777 /apis/apps/v1/namespaces/deployment-1777/replicasets/test-cleanup-controller 8928146c-8457-489d-bfb3-7a43fe15178d 17531574 1 2020-05-19 21:30:08 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 986def4d-8714-4f07-9a39-d16778e28e3e 0xc00383b48f 0xc00383b4a0}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00383b508 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 19 21:30:13.281: INFO: Pod "test-cleanup-controller-7wlfk" is available: &Pod{ObjectMeta:{test-cleanup-controller-7wlfk test-cleanup-controller- deployment-1777 /api/v1/namespaces/deployment-1777/pods/test-cleanup-controller-7wlfk f36703ea-f1a1-45f0-acb7-35c1466af3b3 17531561 0 2020-05-19 21:30:08 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 8928146c-8457-489d-bfb3-7a43fe15178d 0xc00383ba57 0xc00383ba58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-smtqs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-smtqs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-smtqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:30:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:30:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:30:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:30:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.241,StartTime:2020-05-19 21:30:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 21:30:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f713de51e195270abc214275647087a90fa03695df0a9fbb688c83d73873d8e6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 21:30:13.281: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-5hcbc" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-5hcbc test-cleanup-deployment-55ffc6b7b6- deployment-1777 /api/v1/namespaces/deployment-1777/pods/test-cleanup-deployment-55ffc6b7b6-5hcbc f68cb50a-d506-49f7-ab45-003510a56918 17531581 0 2020-05-19 21:30:13 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 9497a518-f489-4aab-bebf-52fb698d0b96 0xc00383bbf7 0xc00383bbf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-smtqs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-smtqs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-smtqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNames
pace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:30:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:30:13.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1777" for this suite. • [SLOW TEST:5.348 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":100,"skipped":1807,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:30:13.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5471 STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 21:30:13.389: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 19 21:30:35.536: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.243:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5471 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 21:30:35.536: INFO: >>> kubeConfig: /root/.kube/config I0519 21:30:35.571546 6 log.go:172] (0xc001948210) (0xc001a2f2c0) Create stream I0519 21:30:35.571580 6 log.go:172] (0xc001948210) (0xc001a2f2c0) Stream added, broadcasting: 1 I0519 21:30:35.573807 6 log.go:172] (0xc001948210) Reply frame received for 1 I0519 21:30:35.573859 6 log.go:172] (0xc001948210) (0xc0027aa960) Create stream I0519 21:30:35.573873 6 log.go:172] (0xc001948210) (0xc0027aa960) Stream added, broadcasting: 3 I0519 21:30:35.575071 6 log.go:172] (0xc001948210) Reply frame received for 3 I0519 21:30:35.575130 6 log.go:172] (0xc001948210) (0xc001a2f360) Create stream I0519 21:30:35.575157 6 log.go:172] 
(0xc001948210) (0xc001a2f360) Stream added, broadcasting: 5 I0519 21:30:35.576336 6 log.go:172] (0xc001948210) Reply frame received for 5 I0519 21:30:35.703382 6 log.go:172] (0xc001948210) Data frame received for 5 I0519 21:30:35.703433 6 log.go:172] (0xc001a2f360) (5) Data frame handling I0519 21:30:35.703467 6 log.go:172] (0xc001948210) Data frame received for 3 I0519 21:30:35.703488 6 log.go:172] (0xc0027aa960) (3) Data frame handling I0519 21:30:35.703509 6 log.go:172] (0xc0027aa960) (3) Data frame sent I0519 21:30:35.703523 6 log.go:172] (0xc001948210) Data frame received for 3 I0519 21:30:35.703535 6 log.go:172] (0xc0027aa960) (3) Data frame handling I0519 21:30:35.705807 6 log.go:172] (0xc001948210) Data frame received for 1 I0519 21:30:35.705910 6 log.go:172] (0xc001a2f2c0) (1) Data frame handling I0519 21:30:35.705918 6 log.go:172] (0xc001a2f2c0) (1) Data frame sent I0519 21:30:35.705926 6 log.go:172] (0xc001948210) (0xc001a2f2c0) Stream removed, broadcasting: 1 I0519 21:30:35.705947 6 log.go:172] (0xc001948210) Go away received I0519 21:30:35.705970 6 log.go:172] (0xc001948210) (0xc001a2f2c0) Stream removed, broadcasting: 1 I0519 21:30:35.705996 6 log.go:172] (0xc001948210) (0xc0027aa960) Stream removed, broadcasting: 3 I0519 21:30:35.706014 6 log.go:172] (0xc001948210) (0xc001a2f360) Stream removed, broadcasting: 5 May 19 21:30:35.706: INFO: Found all expected endpoints: [netserver-0] May 19 21:30:35.708: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.18:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5471 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 21:30:35.708: INFO: >>> kubeConfig: /root/.kube/config I0519 21:30:35.729830 6 log.go:172] (0xc0019489a0) (0xc001a2f7c0) Create stream I0519 21:30:35.729853 6 log.go:172] (0xc0019489a0) (0xc001a2f7c0) Stream added, broadcasting: 1 I0519 21:30:35.731342 6 log.go:172] (0xc0019489a0) Reply frame received for 1 I0519 21:30:35.731363 6 log.go:172] (0xc0019489a0) (0xc001a2f860) Create stream I0519 21:30:35.731372 6 log.go:172] (0xc0019489a0) (0xc001a2f860) Stream added, broadcasting: 3 I0519 21:30:35.732185 6 log.go:172] (0xc0019489a0) Reply frame received for 3 I0519 21:30:35.732219 6 log.go:172] (0xc0019489a0) (0xc00159a1e0) Create stream I0519 21:30:35.732231 6 log.go:172] (0xc0019489a0) (0xc00159a1e0) Stream added, broadcasting: 5 I0519 21:30:35.733079 6 log.go:172] (0xc0019489a0) Reply frame received for 5 I0519 21:30:35.810100 6 log.go:172] (0xc0019489a0) Data frame received for 3 I0519 21:30:35.810125 6 log.go:172] (0xc001a2f860) (3) Data frame handling I0519 21:30:35.810137 6 log.go:172] (0xc001a2f860) (3) Data frame sent I0519 21:30:35.810143 6 log.go:172] (0xc0019489a0) Data frame received for 3 I0519 21:30:35.810148 6 log.go:172] (0xc001a2f860) (3) Data frame handling I0519 21:30:35.810375 6 log.go:172] (0xc0019489a0) Data frame received for 5 I0519 21:30:35.810396 6 log.go:172] (0xc00159a1e0) (5) Data frame handling I0519 21:30:35.811836 6 log.go:172] (0xc0019489a0) Data frame received for 1 I0519 21:30:35.811856 6 log.go:172] (0xc001a2f7c0) (1) Data frame handling I0519 21:30:35.811868 6 log.go:172] (0xc001a2f7c0) (1) Data frame sent I0519 21:30:35.811889 6 log.go:172] (0xc0019489a0) (0xc001a2f7c0) Stream removed, broadcasting: 1 I0519 21:30:35.811955 6 log.go:172] (0xc0019489a0) Go away received I0519 21:30:35.811987 6 log.go:172] (0xc0019489a0) (0xc001a2f7c0) Stream 
removed, broadcasting: 1 I0519 21:30:35.812003 6 log.go:172] (0xc0019489a0) (0xc001a2f860) Stream removed, broadcasting: 3 I0519 21:30:35.812013 6 log.go:172] (0xc0019489a0) (0xc00159a1e0) Stream removed, broadcasting: 5 May 19 21:30:35.812: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:30:35.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5471" for this suite. • [SLOW TEST:22.495 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1823,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:30:35.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 19 21:30:35.929: INFO: Waiting up to 5m0s for pod "var-expansion-7e6c9683-f68f-4767-b6e1-d2ac9b75748a" in namespace "var-expansion-464" to be "success or failure" May 19 21:30:35.933: INFO: Pod "var-expansion-7e6c9683-f68f-4767-b6e1-d2ac9b75748a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.890413ms May 19 21:30:37.937: INFO: Pod "var-expansion-7e6c9683-f68f-4767-b6e1-d2ac9b75748a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007867658s May 19 21:30:39.942: INFO: Pod "var-expansion-7e6c9683-f68f-4767-b6e1-d2ac9b75748a": Phase="Running", Reason="", readiness=true. Elapsed: 4.012639053s May 19 21:30:42.032: INFO: Pod "var-expansion-7e6c9683-f68f-4767-b6e1-d2ac9b75748a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.102593086s STEP: Saw pod success May 19 21:30:42.032: INFO: Pod "var-expansion-7e6c9683-f68f-4767-b6e1-d2ac9b75748a" satisfied condition "success or failure" May 19 21:30:42.036: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-7e6c9683-f68f-4767-b6e1-d2ac9b75748a container dapi-container: STEP: delete the pod May 19 21:30:42.245: INFO: Waiting for pod var-expansion-7e6c9683-f68f-4767-b6e1-d2ac9b75748a to disappear May 19 21:30:42.282: INFO: Pod var-expansion-7e6c9683-f68f-4767-b6e1-d2ac9b75748a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:30:42.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-464" for this suite. • [SLOW TEST:6.471 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1830,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:30:42.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:30:42.752: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 6.493807ms)
May 19 21:30:42.755: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.96143ms)
May 19 21:30:42.758: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.641543ms)
May 19 21:30:42.773: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 15.526849ms)
May 19 21:30:42.778: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.256202ms)
May 19 21:30:42.781: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.53405ms)
May 19 21:30:42.785: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.641046ms)
May 19 21:30:42.788: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.139648ms)
May 19 21:30:42.791: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.700644ms)
May 19 21:30:42.794: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.229147ms)
May 19 21:30:42.796: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.397906ms)
May 19 21:30:42.799: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.431116ms)
May 19 21:30:42.843: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 44.294419ms)
May 19 21:30:42.937: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 93.734504ms)
May 19 21:30:42.972: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 34.724561ms)
May 19 21:30:42.989: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 17.061167ms)
May 19 21:30:43.001: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 11.625146ms)
May 19 21:30:43.004: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.371401ms)
May 19 21:30:43.008: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.580131ms)
May 19 21:30:43.012: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 3.781032ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:30:43.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8800" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":103,"skipped":1832,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:30:43.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:30:43.358: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:30:45.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520643, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520643, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520643, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725520643, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:30:48.407: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:30:48.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6452" for this suite. STEP: Destroying namespace "webhook-6452-markers" for this suite. 
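The mutating pod webhook registered above goes through the AdmissionRegistration API. A skeletal registration of that kind might look as follows; the webhook path, the placeholder caBundle, and the object names are assumptions for illustration (the e2e framework wires up its own service, certificates, and registration), only the service name and namespace come from the log above:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: mutate-pod-demo            # hypothetical name
    webhooks:
    - name: mutate-pod.example.com     # hypothetical
      admissionReviewVersions: ["v1", "v1beta1"]
      sideEffects: None
      clientConfig:
        service:
          name: e2e-test-webhook       # service name from the log above
          namespace: webhook-6452
          path: /mutating-pods         # assumed path
        caBundle: Cg==                 # placeholder; real base64 CA bundle goes here
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    EOF
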
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.630 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":104,"skipped":1835,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:30:48.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-47jp STEP: Creating a pod to test atomic-volume-subpath May 19 21:30:48.818: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-47jp" in namespace "subpath-689" to be "success or failure" May 19 21:30:48.839: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Pending", Reason="", readiness=false. Elapsed: 21.205422ms May 19 21:30:50.850: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032301502s May 19 21:30:52.855: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. Elapsed: 4.036913557s May 19 21:30:54.858: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. Elapsed: 6.04061318s May 19 21:30:56.863: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. Elapsed: 8.045257709s May 19 21:30:58.867: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. Elapsed: 10.04956264s May 19 21:31:00.872: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. Elapsed: 12.053972666s May 19 21:31:02.876: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. Elapsed: 14.058366997s May 19 21:31:04.880: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. Elapsed: 16.062526613s May 19 21:31:06.885: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. Elapsed: 18.067457351s May 19 21:31:08.890: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. Elapsed: 20.071653723s May 19 21:31:10.895: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.076731606s May 19 21:31:12.899: INFO: Pod "pod-subpath-test-downwardapi-47jp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.081231536s STEP: Saw pod success May 19 21:31:12.899: INFO: Pod "pod-subpath-test-downwardapi-47jp" satisfied condition "success or failure" May 19 21:31:12.902: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-47jp container test-container-subpath-downwardapi-47jp: STEP: delete the pod May 19 21:31:12.938: INFO: Waiting for pod pod-subpath-test-downwardapi-47jp to disappear May 19 21:31:12.947: INFO: Pod pod-subpath-test-downwardapi-47jp no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-47jp May 19 21:31:12.947: INFO: Deleting pod "pod-subpath-test-downwardapi-47jp" in namespace "subpath-689" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:31:12.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-689" for this suite. • [SLOW TEST:24.326 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":105,"skipped":1836,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:31:12.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 19 21:31:13.033: INFO: Waiting up to 5m0s for pod "downward-api-e5a715b1-4ef3-423f-80b3-1f6e1ae5955d" in namespace "downward-api-9697" to be "success or failure" May 19 21:31:13.036: INFO: Pod "downward-api-e5a715b1-4ef3-423f-80b3-1f6e1ae5955d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.518104ms May 19 21:31:15.080: INFO: Pod "downward-api-e5a715b1-4ef3-423f-80b3-1f6e1ae5955d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04721657s May 19 21:31:17.084: INFO: Pod "downward-api-e5a715b1-4ef3-423f-80b3-1f6e1ae5955d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051257046s STEP: Saw pod success May 19 21:31:17.084: INFO: Pod "downward-api-e5a715b1-4ef3-423f-80b3-1f6e1ae5955d" satisfied condition "success or failure" May 19 21:31:17.087: INFO: Trying to get logs from node jerma-worker2 pod downward-api-e5a715b1-4ef3-423f-80b3-1f6e1ae5955d container dapi-container: STEP: delete the pod May 19 21:31:17.133: INFO: Waiting for pod downward-api-e5a715b1-4ef3-423f-80b3-1f6e1ae5955d to disappear May 19 21:31:17.182: INFO: Pod downward-api-e5a715b1-4ef3-423f-80b3-1f6e1ae5955d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:31:17.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9697" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1838,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:31:17.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2373.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2373.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2373.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 21:31:25.355: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:25.358: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:25.361: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:25.363: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:25.370: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:25.372: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:25.374: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod 
dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:25.376: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:25.381: INFO: Lookups using dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local] May 19 21:31:30.386: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:30.391: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:30.394: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:30.397: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:30.406: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:30.409: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:30.412: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:30.415: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:30.421: INFO: Lookups using dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local] May 19 21:31:35.386: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:35.390: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:35.394: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:35.397: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:35.406: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:35.408: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:35.412: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:35.414: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:35.419: INFO: Lookups using dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local] May 19 21:31:40.410: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:40.413: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:40.416: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:40.418: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:40.424: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:40.427: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:40.429: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:40.431: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:40.436: INFO: Lookups using dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local] May 19 21:31:45.385: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:45.388: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:45.390: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:45.393: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested 
resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:45.400: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:45.403: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:45.406: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:45.408: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:45.415: INFO: Lookups using dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local] May 19 21:31:50.385: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:50.388: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:50.390: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:50.393: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:50.400: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:50.403: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:50.406: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:50.408: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local from pod dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7: the server could not find the requested resource (get pods dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7) May 19 21:31:50.413: INFO: Lookups using dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2373.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2373.svc.cluster.local jessie_udp@dns-test-service-2.dns-2373.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2373.svc.cluster.local] May 19 21:31:55.418: INFO: DNS probes using dns-2373/dns-test-da4057c0-1ece-4384-a49b-dd82c8320ed7 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:31:55.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2373" for this suite. • [SLOW TEST:38.353 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":107,"skipped":1844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:31:55.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-744 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 19 21:31:56.219: INFO: Found 0 stateful pods, waiting for 3 May 19 21:32:06.223: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 21:32:06.223: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, 
currently Running - Ready=true May 19 21:32:06.223: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 19 21:32:16.223: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 21:32:16.223: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 21:32:16.223: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 19 21:32:16.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-744 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:32:16.532: INFO: stderr: "I0519 21:32:16.360262 1239 log.go:172] (0xc0009bb290) (0xc000cc4320) Create stream\nI0519 21:32:16.360321 1239 log.go:172] (0xc0009bb290) (0xc000cc4320) Stream added, broadcasting: 1\nI0519 21:32:16.363114 1239 log.go:172] (0xc0009bb290) Reply frame received for 1\nI0519 21:32:16.363169 1239 log.go:172] (0xc0009bb290) (0xc000c45040) Create stream\nI0519 21:32:16.363186 1239 log.go:172] (0xc0009bb290) (0xc000c45040) Stream added, broadcasting: 3\nI0519 21:32:16.364039 1239 log.go:172] (0xc0009bb290) Reply frame received for 3\nI0519 21:32:16.364064 1239 log.go:172] (0xc0009bb290) (0xc000a2c1e0) Create stream\nI0519 21:32:16.364072 1239 log.go:172] (0xc0009bb290) (0xc000a2c1e0) Stream added, broadcasting: 5\nI0519 21:32:16.364925 1239 log.go:172] (0xc0009bb290) Reply frame received for 5\nI0519 21:32:16.455735 1239 log.go:172] (0xc0009bb290) Data frame received for 5\nI0519 21:32:16.455758 1239 log.go:172] (0xc000a2c1e0) (5) Data frame handling\nI0519 21:32:16.455776 1239 log.go:172] (0xc000a2c1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 21:32:16.525402 1239 log.go:172] (0xc0009bb290) Data frame received for 3\nI0519 21:32:16.525456 1239 log.go:172] (0xc000c45040) (3) Data frame handling\nI0519 21:32:16.525502 1239 log.go:172] (0xc000c45040) (3) Data frame sent\nI0519 21:32:16.525526 1239 log.go:172] (0xc0009bb290) Data frame received for 3\nI0519 21:32:16.525538 1239 log.go:172] (0xc000c45040) (3) Data frame handling\nI0519 21:32:16.525577 1239 log.go:172] (0xc0009bb290) Data frame received for 5\nI0519 21:32:16.525604 1239 log.go:172] (0xc000a2c1e0) (5) Data frame handling\nI0519 21:32:16.528149 1239 log.go:172] (0xc0009bb290) Data frame received for 1\nI0519 21:32:16.528178 1239 log.go:172] (0xc000cc4320) (1) Data frame handling\nI0519 21:32:16.528199 1239 log.go:172] (0xc000cc4320) (1) Data frame sent\nI0519 21:32:16.528221 1239 log.go:172] (0xc0009bb290) (0xc000cc4320) Stream removed, broadcasting: 1\nI0519 21:32:16.528252 1239 log.go:172] (0xc0009bb290) Go away received\nI0519 21:32:16.528547 1239 log.go:172] (0xc0009bb290) (0xc000cc4320) Stream removed, broadcasting: 1\nI0519 21:32:16.528571 1239 log.go:172] (0xc0009bb290) (0xc000c45040) Stream removed, broadcasting: 3\nI0519 21:32:16.528580 1239 log.go:172] (0xc0009bb290) (0xc000a2c1e0) Stream removed, broadcasting: 5\n" May 19 21:32:16.532: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:32:16.532: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 19 21:32:26.565: INFO: Updating stateful set ss2 STEP: Creating a new revision 
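Note: the new revision above is created by updating only the StatefulSet's pod template; the controller hashes the template into a ControllerRevision and then replaces pods from the highest ordinal down. A minimal sketch of the same update done by hand with kubectl, reusing the names and image tags from this run (the JSON-patch path assumes the webserver container is the first entry in the template):

# Patch the pod template image; the controller records a new ControllerRevision
kubectl --namespace=statefulset-744 patch statefulset ss2 --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/httpd:2.4.39-alpine"}]'
# List the revisions the controller is tracking (ss2-84f9d6bf57 and ss2-65c7964b94 in this run)
kubectl --namespace=statefulset-744 get controllerrevisions
# Block until the rolling update finishes, ordinal N-1 down to 0
kubectl --namespace=statefulset-744 rollout status statefulset/ss2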
STEP: Updating Pods in reverse ordinal order May 19 21:32:36.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-744 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 21:32:36.888: INFO: stderr: "I0519 21:32:36.775727 1259 log.go:172] (0xc000afd340) (0xc000a8e640) Create stream\nI0519 21:32:36.775780 1259 log.go:172] (0xc000afd340) (0xc000a8e640) Stream added, broadcasting: 1\nI0519 21:32:36.782516 1259 log.go:172] (0xc000afd340) Reply frame received for 1\nI0519 21:32:36.782638 1259 log.go:172] (0xc000afd340) (0xc0005f8640) Create stream\nI0519 21:32:36.782657 1259 log.go:172] (0xc000afd340) (0xc0005f8640) Stream added, broadcasting: 3\nI0519 21:32:36.796677 1259 log.go:172] (0xc000afd340) Reply frame received for 3\nI0519 21:32:36.796713 1259 log.go:172] (0xc000afd340) (0xc00073b400) Create stream\nI0519 21:32:36.796721 1259 log.go:172] (0xc000afd340) (0xc00073b400) Stream added, broadcasting: 5\nI0519 21:32:36.798128 1259 log.go:172] (0xc000afd340) Reply frame received for 5\nI0519 21:32:36.881538 1259 log.go:172] (0xc000afd340) Data frame received for 3\nI0519 21:32:36.881578 1259 log.go:172] (0xc0005f8640) (3) Data frame handling\nI0519 21:32:36.881616 1259 log.go:172] (0xc0005f8640) (3) Data frame sent\nI0519 21:32:36.881728 1259 log.go:172] (0xc000afd340) Data frame received for 5\nI0519 21:32:36.881760 1259 log.go:172] (0xc00073b400) (5) Data frame handling\nI0519 21:32:36.881770 1259 log.go:172] (0xc00073b400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 21:32:36.881787 1259 log.go:172] (0xc000afd340) Data frame received for 3\nI0519 21:32:36.881814 1259 log.go:172] (0xc0005f8640) (3) Data frame handling\nI0519 21:32:36.881854 1259 log.go:172] (0xc000afd340) Data frame received for 5\nI0519 21:32:36.881878 1259 log.go:172] (0xc00073b400) (5) Data frame handling\nI0519 21:32:36.883452 1259 log.go:172] (0xc000afd340) Data frame received for 1\nI0519 21:32:36.883476 1259 log.go:172] (0xc000a8e640) (1) Data frame handling\nI0519 21:32:36.883489 1259 log.go:172] (0xc000a8e640) (1) Data frame sent\nI0519 21:32:36.883503 1259 log.go:172] (0xc000afd340) (0xc000a8e640) Stream removed, broadcasting: 1\nI0519 21:32:36.883561 1259 log.go:172] (0xc000afd340) Go away received\nI0519 21:32:36.883915 1259 log.go:172] (0xc000afd340) (0xc000a8e640) Stream removed, broadcasting: 1\nI0519 21:32:36.883939 1259 log.go:172] (0xc000afd340) (0xc0005f8640) Stream removed, broadcasting: 3\nI0519 21:32:36.883957 1259 log.go:172] (0xc000afd340) (0xc00073b400) Stream removed, broadcasting: 5\n" May 19 21:32:36.888: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 21:32:36.888: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 21:32:46.909: INFO: Waiting for StatefulSet statefulset-744/ss2 to complete update May 19 21:32:46.909: INFO: Waiting for Pod statefulset-744/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 19 21:32:46.909: INFO: Waiting for Pod statefulset-744/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 19 21:32:46.909: INFO: Waiting for Pod statefulset-744/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 19 21:32:56.917: INFO: Waiting for StatefulSet statefulset-744/ss2 to complete update May 19 21:32:56.917: INFO: Waiting for Pod statefulset-744/ss2-0 to have revision 
ss2-84f9d6bf57 update revision ss2-65c7964b94 May 19 21:33:06.917: INFO: Waiting for StatefulSet statefulset-744/ss2 to complete update May 19 21:33:06.917: INFO: Waiting for Pod statefulset-744/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 19 21:33:16.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-744 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:33:17.196: INFO: stderr: "I0519 21:33:17.052854 1279 log.go:172] (0xc0009bc6e0) (0xc000a56000) Create stream\nI0519 21:33:17.052909 1279 log.go:172] (0xc0009bc6e0) (0xc000a56000) Stream added, broadcasting: 1\nI0519 21:33:17.055716 1279 log.go:172] (0xc0009bc6e0) Reply frame received for 1\nI0519 21:33:17.055749 1279 log.go:172] (0xc0009bc6e0) (0xc000a560a0) Create stream\nI0519 21:33:17.055759 1279 log.go:172] (0xc0009bc6e0) (0xc000a560a0) Stream added, broadcasting: 3\nI0519 21:33:17.056588 1279 log.go:172] (0xc0009bc6e0) Reply frame received for 3\nI0519 21:33:17.056638 1279 log.go:172] (0xc0009bc6e0) (0xc000645a40) Create stream\nI0519 21:33:17.056657 1279 log.go:172] (0xc0009bc6e0) (0xc000645a40) Stream added, broadcasting: 5\nI0519 21:33:17.057929 1279 log.go:172] (0xc0009bc6e0) Reply frame received for 5\nI0519 21:33:17.149368 1279 log.go:172] (0xc0009bc6e0) Data frame received for 5\nI0519 21:33:17.149406 1279 log.go:172] (0xc000645a40) (5) Data frame handling\nI0519 21:33:17.149421 1279 log.go:172] (0xc000645a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 21:33:17.188275 1279 log.go:172] (0xc0009bc6e0) Data frame received for 3\nI0519 21:33:17.188313 1279 log.go:172] (0xc000a560a0) (3) Data frame handling\nI0519 21:33:17.188345 1279 log.go:172] (0xc000a560a0) (3) Data frame sent\nI0519 21:33:17.188358 1279 log.go:172] (0xc0009bc6e0) Data frame received for 3\nI0519 21:33:17.188372 1279 log.go:172] (0xc000a560a0) (3) Data frame handling\nI0519 21:33:17.188446 1279 log.go:172] (0xc0009bc6e0) Data frame received for 5\nI0519 21:33:17.188475 1279 log.go:172] (0xc000645a40) (5) Data frame handling\nI0519 21:33:17.190835 1279 log.go:172] (0xc0009bc6e0) Data frame received for 1\nI0519 21:33:17.190872 1279 log.go:172] (0xc000a56000) (1) Data frame handling\nI0519 21:33:17.190893 1279 log.go:172] (0xc000a56000) (1) Data frame sent\nI0519 21:33:17.190913 1279 log.go:172] (0xc0009bc6e0) (0xc000a56000) Stream removed, broadcasting: 1\nI0519 21:33:17.190941 1279 log.go:172] (0xc0009bc6e0) Go away received\nI0519 21:33:17.191470 1279 log.go:172] (0xc0009bc6e0) (0xc000a56000) Stream removed, broadcasting: 1\nI0519 21:33:17.191496 1279 log.go:172] (0xc0009bc6e0) (0xc000a560a0) Stream removed, broadcasting: 3\nI0519 21:33:17.191510 1279 log.go:172] (0xc0009bc6e0) (0xc000645a40) Stream removed, broadcasting: 5\n" May 19 21:33:17.196: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:33:17.196: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 21:33:27.249: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 19 21:33:37.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-744 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 21:33:37.544: INFO: stderr: "I0519 21:33:37.429021 1301 log.go:172] 
(0xc000a913f0) (0xc000a885a0) Create stream\nI0519 21:33:37.429313 1301 log.go:172] (0xc000a913f0) (0xc000a885a0) Stream added, broadcasting: 1\nI0519 21:33:37.434009 1301 log.go:172] (0xc000a913f0) Reply frame received for 1\nI0519 21:33:37.434051 1301 log.go:172] (0xc000a913f0) (0xc0006006e0) Create stream\nI0519 21:33:37.434062 1301 log.go:172] (0xc000a913f0) (0xc0006006e0) Stream added, broadcasting: 3\nI0519 21:33:37.434850 1301 log.go:172] (0xc000a913f0) Reply frame received for 3\nI0519 21:33:37.434899 1301 log.go:172] (0xc000a913f0) (0xc0007854a0) Create stream\nI0519 21:33:37.434913 1301 log.go:172] (0xc000a913f0) (0xc0007854a0) Stream added, broadcasting: 5\nI0519 21:33:37.435685 1301 log.go:172] (0xc000a913f0) Reply frame received for 5\nI0519 21:33:37.538388 1301 log.go:172] (0xc000a913f0) Data frame received for 3\nI0519 21:33:37.538415 1301 log.go:172] (0xc0006006e0) (3) Data frame handling\nI0519 21:33:37.538426 1301 log.go:172] (0xc0006006e0) (3) Data frame sent\nI0519 21:33:37.538434 1301 log.go:172] (0xc000a913f0) Data frame received for 3\nI0519 21:33:37.538441 1301 log.go:172] (0xc0006006e0) (3) Data frame handling\nI0519 21:33:37.538467 1301 log.go:172] (0xc000a913f0) Data frame received for 5\nI0519 21:33:37.538475 1301 log.go:172] (0xc0007854a0) (5) Data frame handling\nI0519 21:33:37.538488 1301 log.go:172] (0xc0007854a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 21:33:37.538507 1301 log.go:172] (0xc000a913f0) Data frame received for 5\nI0519 21:33:37.538537 1301 log.go:172] (0xc0007854a0) (5) Data frame handling\nI0519 21:33:37.539541 1301 log.go:172] (0xc000a913f0) Data frame received for 1\nI0519 21:33:37.539566 1301 log.go:172] (0xc000a885a0) (1) Data frame handling\nI0519 21:33:37.539580 1301 log.go:172] (0xc000a885a0) (1) Data frame sent\nI0519 21:33:37.539703 1301 log.go:172] (0xc000a913f0) (0xc000a885a0) Stream removed, broadcasting: 1\nI0519 21:33:37.539743 1301 log.go:172] (0xc000a913f0) Go away received\nI0519 21:33:37.540056 1301 log.go:172] (0xc000a913f0) (0xc000a885a0) Stream removed, broadcasting: 1\nI0519 21:33:37.540079 1301 log.go:172] (0xc000a913f0) (0xc0006006e0) Stream removed, broadcasting: 3\nI0519 21:33:37.540089 1301 log.go:172] (0xc000a913f0) (0xc0007854a0) Stream removed, broadcasting: 5\n" May 19 21:33:37.544: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 21:33:37.544: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 21:33:47.562: INFO: Waiting for StatefulSet statefulset-744/ss2 to complete update May 19 21:33:47.563: INFO: Waiting for Pod statefulset-744/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 19 21:33:47.563: INFO: Waiting for Pod statefulset-744/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 19 21:33:47.563: INFO: Waiting for Pod statefulset-744/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 19 21:33:57.568: INFO: Waiting for StatefulSet statefulset-744/ss2 to complete update May 19 21:33:57.568: INFO: Waiting for Pod statefulset-744/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 19 21:33:57.568: INFO: Waiting for Pod statefulset-744/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 19 21:34:07.570: INFO: Waiting for StatefulSet statefulset-744/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 19 21:34:17.570: INFO: Deleting all statefulset in ns statefulset-744 May 19 21:34:17.573: INFO: Scaling statefulset ss2 to 0 May 19 21:34:47.592: INFO: Waiting for statefulset status.replicas updated to 0 May 19 21:34:47.594: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:34:47.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-744" for this suite. • [SLOW TEST:172.070 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":108,"skipped":1871,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:34:47.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 19 21:34:47.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1552' May 19 21:34:47.765: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 21:34:47.765: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 19 21:34:49.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1552' May 19 21:34:50.028: INFO: stderr: "" May 19 21:34:50.028: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:34:50.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1552" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":109,"skipped":1968,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:34:50.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 19 21:34:50.427: INFO: Waiting up to 5m0s for pod "pod-29ecb911-422e-4941-8a28-c1ef460afbab" in namespace "emptydir-3892" to be "success or failure" May 19 21:34:50.437: INFO: Pod "pod-29ecb911-422e-4941-8a28-c1ef460afbab": Phase="Pending", Reason="", readiness=false. Elapsed: 10.308668ms May 19 21:34:52.442: INFO: Pod "pod-29ecb911-422e-4941-8a28-c1ef460afbab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015320893s May 19 21:34:54.446: INFO: Pod "pod-29ecb911-422e-4941-8a28-c1ef460afbab": Phase="Running", Reason="", readiness=true. Elapsed: 4.018869794s May 19 21:34:56.450: INFO: Pod "pod-29ecb911-422e-4941-8a28-c1ef460afbab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022902881s STEP: Saw pod success May 19 21:34:56.450: INFO: Pod "pod-29ecb911-422e-4941-8a28-c1ef460afbab" satisfied condition "success or failure" May 19 21:34:56.452: INFO: Trying to get logs from node jerma-worker pod pod-29ecb911-422e-4941-8a28-c1ef460afbab container test-container: STEP: delete the pod May 19 21:34:56.492: INFO: Waiting for pod pod-29ecb911-422e-4941-8a28-c1ef460afbab to disappear May 19 21:34:56.502: INFO: Pod pod-29ecb911-422e-4941-8a28-c1ef460afbab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:34:56.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3892" for this suite. 
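Note: this spec mounts an emptyDir volume with medium Memory (tmpfs), has the container create a file as root with mode 0666, and treats the pod reaching phase Succeeded as the pass condition. A rough stand-alone reproduction under assumed names (the suite itself uses the mounttest image and generated pod names):

kubectl create namespace emptydir-demo
kubectl --namespace=emptydir-demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test/f && grep ' /mnt/test ' /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF
# Pass condition used by the framework: the pod phase becomes Succeeded
kubectl --namespace=emptydir-demo get pod emptydir-0666-tmpfs -o jsonpath='{.status.phase}'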
• [SLOW TEST:6.472 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:34:56.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 19 21:34:56.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3800' May 19 21:34:56.688: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 21:34:56.688: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 19 21:34:56.724: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-lq7sh] May 19 21:34:56.724: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-lq7sh" in namespace "kubectl-3800" to be "running and ready" May 19 21:34:56.727: INFO: Pod "e2e-test-httpd-rc-lq7sh": Phase="Pending", Reason="", readiness=false. Elapsed: 3.082811ms May 19 21:34:58.731: INFO: Pod "e2e-test-httpd-rc-lq7sh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007275861s May 19 21:35:00.735: INFO: Pod "e2e-test-httpd-rc-lq7sh": Phase="Running", Reason="", readiness=true. Elapsed: 4.011461646s May 19 21:35:00.735: INFO: Pod "e2e-test-httpd-rc-lq7sh" satisfied condition "running and ready" May 19 21:35:00.735: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-lq7sh] May 19 21:35:00.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-3800' May 19 21:35:00.868: INFO: stderr: "" May 19 21:35:00.868: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.253. 
Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.253. Set the 'ServerName' directive globally to suppress this message\n[Tue May 19 21:34:59.343600 2020] [mpm_event:notice] [pid 1:tid 139967240448872] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue May 19 21:34:59.343661 2020] [core:notice] [pid 1:tid 139967240448872] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 19 21:35:00.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3800' May 19 21:35:00.987: INFO: stderr: "" May 19 21:35:00.987: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:35:00.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3800" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":111,"skipped":2008,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:35:01.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0519 21:35:11.138915 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
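Note: unlike the suite's companion orphan-on-delete spec, this delete is issued with a background propagation policy, so the garbage collector removes the dependent pods once the RC is gone; the warning above indicates the framework could not identify a registered master node to scrape, so scheduler and controller-manager metrics are skipped. A hedged kubectl equivalent of the two delete modes (the suite performs the delete through the API with DeleteOptions; the controller name simpletest.rc is illustrative):

# Cascading (non-orphaning) delete: dependent pods are garbage collected
kubectl --namespace=gc-8344 delete rc simpletest.rc --cascade=true
# Orphaning delete: the pods outlive their owner
kubectl --namespace=gc-8344 delete rc simpletest.rc --cascade=false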
May 19 21:35:11.138: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:35:11.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8344" for this suite. • [SLOW TEST:10.129 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":112,"skipped":2009,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:35:11.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4462 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-4462 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4462 May 19 21:35:11.245: INFO: Found 0 stateful pods, waiting for 1 May 19 21:35:21.249: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 19 21:35:21.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4462 ss-0 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:35:21.506: INFO: stderr: "I0519 21:35:21.374397 1422 log.go:172] (0xc000502000) (0xc000615a40) Create stream\nI0519 21:35:21.374453 1422 log.go:172] (0xc000502000) (0xc000615a40) Stream added, broadcasting: 1\nI0519 21:35:21.376590 1422 log.go:172] (0xc000502000) Reply frame received for 1\nI0519 21:35:21.376626 1422 log.go:172] (0xc000502000) (0xc000a7a000) Create stream\nI0519 21:35:21.376633 1422 log.go:172] (0xc000502000) (0xc000a7a000) Stream added, broadcasting: 3\nI0519 21:35:21.377630 1422 log.go:172] (0xc000502000) Reply frame received for 3\nI0519 21:35:21.377686 1422 log.go:172] (0xc000502000) (0xc000a7a0a0) Create stream\nI0519 21:35:21.377710 1422 log.go:172] (0xc000502000) (0xc000a7a0a0) Stream added, broadcasting: 5\nI0519 21:35:21.378769 1422 log.go:172] (0xc000502000) Reply frame received for 5\nI0519 21:35:21.458944 1422 log.go:172] (0xc000502000) Data frame received for 5\nI0519 21:35:21.458967 1422 log.go:172] (0xc000a7a0a0) (5) Data frame handling\nI0519 21:35:21.458981 1422 log.go:172] (0xc000a7a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 21:35:21.498181 1422 log.go:172] (0xc000502000) Data frame received for 3\nI0519 21:35:21.498211 1422 log.go:172] (0xc000a7a000) (3) Data frame handling\nI0519 21:35:21.498241 1422 log.go:172] (0xc000a7a000) (3) Data frame sent\nI0519 21:35:21.498439 1422 log.go:172] (0xc000502000) Data frame received for 5\nI0519 21:35:21.498474 1422 log.go:172] (0xc000a7a0a0) (5) Data frame handling\nI0519 21:35:21.498653 1422 log.go:172] (0xc000502000) Data frame received for 3\nI0519 21:35:21.498688 1422 log.go:172] (0xc000a7a000) (3) Data frame handling\nI0519 21:35:21.500861 1422 log.go:172] (0xc000502000) Data frame received for 1\nI0519 21:35:21.500880 1422 log.go:172] (0xc000615a40) (1) Data frame handling\nI0519 21:35:21.500895 1422 log.go:172] (0xc000615a40) (1) Data frame sent\nI0519 21:35:21.500918 1422 log.go:172] (0xc000502000) (0xc000615a40) Stream removed, broadcasting: 1\nI0519 21:35:21.500940 1422 log.go:172] (0xc000502000) Go away received\nI0519 21:35:21.501554 1422 log.go:172] (0xc000502000) (0xc000615a40) Stream removed, broadcasting: 1\nI0519 21:35:21.501585 1422 log.go:172] (0xc000502000) (0xc000a7a000) Stream removed, broadcasting: 3\nI0519 21:35:21.501596 1422 log.go:172] (0xc000502000) (0xc000a7a0a0) Stream removed, broadcasting: 5\n" May 19 21:35:21.506: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:35:21.506: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 21:35:21.509: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 19 21:35:31.514: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 21:35:31.514: INFO: Waiting for statefulset status.replicas updated to 0 May 19 21:35:31.527: INFO: POD NODE PHASE GRACE CONDITIONS May 19 21:35:31.527: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 
21:35:11 +0000 UTC }] May 19 21:35:31.527: INFO: May 19 21:35:31.527: INFO: StatefulSet ss has not reached scale 3, at 1 May 19 21:35:32.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996218418s May 19 21:35:33.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991959052s May 19 21:35:35.110: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.695579475s May 19 21:35:36.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.413757392s May 19 21:35:37.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.409339963s May 19 21:35:38.122: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.405648073s May 19 21:35:39.127: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.401285331s May 19 21:35:40.132: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.396521108s May 19 21:35:41.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 391.275223ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4462 May 19 21:35:42.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 21:35:42.386: INFO: stderr: "I0519 21:35:42.283674 1441 log.go:172] (0xc000aa4000) (0xc00075f5e0) Create stream\nI0519 21:35:42.283727 1441 log.go:172] (0xc000aa4000) (0xc00075f5e0) Stream added, broadcasting: 1\nI0519 21:35:42.286566 1441 log.go:172] (0xc000aa4000) Reply frame received for 1\nI0519 21:35:42.286609 1441 log.go:172] (0xc000aa4000) (0xc000b54000) Create stream\nI0519 21:35:42.286622 1441 log.go:172] (0xc000aa4000) (0xc000b54000) Stream added, broadcasting: 3\nI0519 21:35:42.287801 1441 log.go:172] (0xc000aa4000) Reply frame received for 3\nI0519 21:35:42.287830 1441 log.go:172] (0xc000aa4000) (0xc000a06000) Create stream\nI0519 21:35:42.287839 1441 log.go:172] (0xc000aa4000) (0xc000a06000) Stream added, broadcasting: 5\nI0519 21:35:42.288744 1441 log.go:172] (0xc000aa4000) Reply frame received for 5\nI0519 21:35:42.379893 1441 log.go:172] (0xc000aa4000) Data frame received for 3\nI0519 21:35:42.379942 1441 log.go:172] (0xc000b54000) (3) Data frame handling\nI0519 21:35:42.379956 1441 log.go:172] (0xc000b54000) (3) Data frame sent\nI0519 21:35:42.379966 1441 log.go:172] (0xc000aa4000) Data frame received for 3\nI0519 21:35:42.379973 1441 log.go:172] (0xc000b54000) (3) Data frame handling\nI0519 21:35:42.380003 1441 log.go:172] (0xc000aa4000) Data frame received for 5\nI0519 21:35:42.380014 1441 log.go:172] (0xc000a06000) (5) Data frame handling\nI0519 21:35:42.380033 1441 log.go:172] (0xc000a06000) (5) Data frame sent\nI0519 21:35:42.380043 1441 log.go:172] (0xc000aa4000) Data frame received for 5\nI0519 21:35:42.380048 1441 log.go:172] (0xc000a06000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 21:35:42.381560 1441 log.go:172] (0xc000aa4000) Data frame received for 1\nI0519 21:35:42.381580 1441 log.go:172] (0xc00075f5e0) (1) Data frame handling\nI0519 21:35:42.381594 1441 log.go:172] (0xc00075f5e0) (1) Data frame sent\nI0519 21:35:42.381839 1441 log.go:172] (0xc000aa4000) (0xc00075f5e0) Stream removed, broadcasting: 1\nI0519 21:35:42.381863 1441 log.go:172] (0xc000aa4000) Go away received\nI0519 21:35:42.382186 1441 log.go:172] (0xc000aa4000) (0xc00075f5e0) Stream removed, broadcasting: 1\nI0519 21:35:42.382207 1441 
log.go:172] (0xc000aa4000) (0xc000b54000) Stream removed, broadcasting: 3\nI0519 21:35:42.382217 1441 log.go:172] (0xc000aa4000) (0xc000a06000) Stream removed, broadcasting: 5\n" May 19 21:35:42.386: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 21:35:42.386: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 21:35:42.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4462 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 21:35:42.625: INFO: stderr: "I0519 21:35:42.521532 1463 log.go:172] (0xc0000f31e0) (0xc000a9e000) Create stream\nI0519 21:35:42.521594 1463 log.go:172] (0xc0000f31e0) (0xc000a9e000) Stream added, broadcasting: 1\nI0519 21:35:42.524538 1463 log.go:172] (0xc0000f31e0) Reply frame received for 1\nI0519 21:35:42.524598 1463 log.go:172] (0xc0000f31e0) (0xc000695cc0) Create stream\nI0519 21:35:42.524618 1463 log.go:172] (0xc0000f31e0) (0xc000695cc0) Stream added, broadcasting: 3\nI0519 21:35:42.526145 1463 log.go:172] (0xc0000f31e0) Reply frame received for 3\nI0519 21:35:42.526207 1463 log.go:172] (0xc0000f31e0) (0xc000a9e0a0) Create stream\nI0519 21:35:42.526223 1463 log.go:172] (0xc0000f31e0) (0xc000a9e0a0) Stream added, broadcasting: 5\nI0519 21:35:42.527298 1463 log.go:172] (0xc0000f31e0) Reply frame received for 5\nI0519 21:35:42.596986 1463 log.go:172] (0xc0000f31e0) Data frame received for 5\nI0519 21:35:42.597013 1463 log.go:172] (0xc000a9e0a0) (5) Data frame handling\nI0519 21:35:42.597028 1463 log.go:172] (0xc000a9e0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 21:35:42.615720 1463 log.go:172] (0xc0000f31e0) Data frame received for 3\nI0519 21:35:42.615752 1463 log.go:172] (0xc000695cc0) (3) Data frame handling\nI0519 21:35:42.615767 1463 log.go:172] (0xc000695cc0) (3) Data frame sent\nI0519 21:35:42.615822 1463 log.go:172] (0xc0000f31e0) Data frame received for 5\nI0519 21:35:42.615836 1463 log.go:172] (0xc000a9e0a0) (5) Data frame handling\nI0519 21:35:42.615846 1463 log.go:172] (0xc000a9e0a0) (5) Data frame sent\nI0519 21:35:42.615854 1463 log.go:172] (0xc0000f31e0) Data frame received for 5\nI0519 21:35:42.615861 1463 log.go:172] (0xc000a9e0a0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0519 21:35:42.615879 1463 log.go:172] (0xc000a9e0a0) (5) Data frame sent\nI0519 21:35:42.615892 1463 log.go:172] (0xc0000f31e0) Data frame received for 5\nI0519 21:35:42.615898 1463 log.go:172] (0xc000a9e0a0) (5) Data frame handling\nI0519 21:35:42.615915 1463 log.go:172] (0xc0000f31e0) Data frame received for 3\nI0519 21:35:42.615923 1463 log.go:172] (0xc000695cc0) (3) Data frame handling\nI0519 21:35:42.619966 1463 log.go:172] (0xc0000f31e0) Data frame received for 1\nI0519 21:35:42.620004 1463 log.go:172] (0xc000a9e000) (1) Data frame handling\nI0519 21:35:42.620034 1463 log.go:172] (0xc000a9e000) (1) Data frame sent\nI0519 21:35:42.620057 1463 log.go:172] (0xc0000f31e0) (0xc000a9e000) Stream removed, broadcasting: 1\nI0519 21:35:42.620091 1463 log.go:172] (0xc0000f31e0) Go away received\nI0519 21:35:42.620563 1463 log.go:172] (0xc0000f31e0) (0xc000a9e000) Stream removed, broadcasting: 1\nI0519 21:35:42.620592 1463 log.go:172] (0xc0000f31e0) (0xc000695cc0) Stream removed, broadcasting: 3\nI0519 21:35:42.620605 1463 log.go:172] (0xc0000f31e0) (0xc000a9e0a0) Stream 
removed, broadcasting: 5\n" May 19 21:35:42.625: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 21:35:42.625: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 21:35:42.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4462 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 21:35:42.834: INFO: stderr: "I0519 21:35:42.749051 1485 log.go:172] (0xc000998a50) (0xc00062fa40) Create stream\nI0519 21:35:42.749104 1485 log.go:172] (0xc000998a50) (0xc00062fa40) Stream added, broadcasting: 1\nI0519 21:35:42.751306 1485 log.go:172] (0xc000998a50) Reply frame received for 1\nI0519 21:35:42.751340 1485 log.go:172] (0xc000998a50) (0xc00062fc20) Create stream\nI0519 21:35:42.751351 1485 log.go:172] (0xc000998a50) (0xc00062fc20) Stream added, broadcasting: 3\nI0519 21:35:42.752212 1485 log.go:172] (0xc000998a50) Reply frame received for 3\nI0519 21:35:42.752247 1485 log.go:172] (0xc000998a50) (0xc0005e6000) Create stream\nI0519 21:35:42.752261 1485 log.go:172] (0xc000998a50) (0xc0005e6000) Stream added, broadcasting: 5\nI0519 21:35:42.753070 1485 log.go:172] (0xc000998a50) Reply frame received for 5\nI0519 21:35:42.824986 1485 log.go:172] (0xc000998a50) Data frame received for 5\nI0519 21:35:42.825027 1485 log.go:172] (0xc0005e6000) (5) Data frame handling\nI0519 21:35:42.825041 1485 log.go:172] (0xc0005e6000) (5) Data frame sent\nI0519 21:35:42.825049 1485 log.go:172] (0xc000998a50) Data frame received for 5\nI0519 21:35:42.825061 1485 log.go:172] (0xc0005e6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0519 21:35:42.825083 1485 log.go:172] (0xc000998a50) Data frame received for 3\nI0519 21:35:42.825093 1485 log.go:172] (0xc00062fc20) (3) Data frame handling\nI0519 21:35:42.825251 1485 log.go:172] (0xc00062fc20) (3) Data frame sent\nI0519 21:35:42.825274 1485 log.go:172] (0xc000998a50) Data frame received for 3\nI0519 21:35:42.825284 1485 log.go:172] (0xc00062fc20) (3) Data frame handling\nI0519 21:35:42.826886 1485 log.go:172] (0xc000998a50) Data frame received for 1\nI0519 21:35:42.826909 1485 log.go:172] (0xc00062fa40) (1) Data frame handling\nI0519 21:35:42.826923 1485 log.go:172] (0xc00062fa40) (1) Data frame sent\nI0519 21:35:42.826939 1485 log.go:172] (0xc000998a50) (0xc00062fa40) Stream removed, broadcasting: 1\nI0519 21:35:42.827302 1485 log.go:172] (0xc000998a50) Go away received\nI0519 21:35:42.827388 1485 log.go:172] (0xc000998a50) (0xc00062fa40) Stream removed, broadcasting: 1\nI0519 21:35:42.827453 1485 log.go:172] (0xc000998a50) (0xc00062fc20) Stream removed, broadcasting: 3\nI0519 21:35:42.827473 1485 log.go:172] (0xc000998a50) (0xc0005e6000) Stream removed, broadcasting: 5\n" May 19 21:35:42.834: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 21:35:42.834: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 21:35:42.838: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 19 21:35:42.838: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 19 21:35:42.838: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true 
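Note: the scale-up above only bursts because this stateful set is created with podManagementPolicy: Parallel; under the default OrderedReady policy the controller would not create ss-1 and ss-2 while ss-0 is unready. A short sketch of the same scale operation with kubectl, using the names from this run:

# With podManagementPolicy: Parallel, ss-1 and ss-2 are created immediately,
# without waiting for the deliberately-unready ss-0
kubectl --namespace=statefulset-4462 scale statefulset ss --replicas=3
# Confirm the policy that permits the burst
kubectl --namespace=statefulset-4462 get statefulset ss -o jsonpath='{.spec.podManagementPolicy}'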
STEP: Scale down will not halt with unhealthy stateful pod May 19 21:35:42.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4462 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:35:43.044: INFO: stderr: "I0519 21:35:42.970795 1507 log.go:172] (0xc0007b4b00) (0xc0007b0000) Create stream\nI0519 21:35:42.970865 1507 log.go:172] (0xc0007b4b00) (0xc0007b0000) Stream added, broadcasting: 1\nI0519 21:35:42.973859 1507 log.go:172] (0xc0007b4b00) Reply frame received for 1\nI0519 21:35:42.973906 1507 log.go:172] (0xc0007b4b00) (0xc000609ae0) Create stream\nI0519 21:35:42.973925 1507 log.go:172] (0xc0007b4b00) (0xc000609ae0) Stream added, broadcasting: 3\nI0519 21:35:42.975145 1507 log.go:172] (0xc0007b4b00) Reply frame received for 3\nI0519 21:35:42.975181 1507 log.go:172] (0xc0007b4b00) (0xc0005fa000) Create stream\nI0519 21:35:42.975203 1507 log.go:172] (0xc0007b4b00) (0xc0005fa000) Stream added, broadcasting: 5\nI0519 21:35:42.976476 1507 log.go:172] (0xc0007b4b00) Reply frame received for 5\nI0519 21:35:43.036783 1507 log.go:172] (0xc0007b4b00) Data frame received for 5\nI0519 21:35:43.036823 1507 log.go:172] (0xc0005fa000) (5) Data frame handling\nI0519 21:35:43.036843 1507 log.go:172] (0xc0005fa000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 21:35:43.036867 1507 log.go:172] (0xc0007b4b00) Data frame received for 3\nI0519 21:35:43.036901 1507 log.go:172] (0xc000609ae0) (3) Data frame handling\nI0519 21:35:43.036937 1507 log.go:172] (0xc000609ae0) (3) Data frame sent\nI0519 21:35:43.037106 1507 log.go:172] (0xc0007b4b00) Data frame received for 3\nI0519 21:35:43.037372 1507 log.go:172] (0xc000609ae0) (3) Data frame handling\nI0519 21:35:43.037438 1507 log.go:172] (0xc0007b4b00) Data frame received for 5\nI0519 21:35:43.037467 1507 log.go:172] (0xc0005fa000) (5) Data frame handling\nI0519 21:35:43.038997 1507 log.go:172] (0xc0007b4b00) Data frame received for 1\nI0519 21:35:43.039025 1507 log.go:172] (0xc0007b0000) (1) Data frame handling\nI0519 21:35:43.039046 1507 log.go:172] (0xc0007b0000) (1) Data frame sent\nI0519 21:35:43.039090 1507 log.go:172] (0xc0007b4b00) (0xc0007b0000) Stream removed, broadcasting: 1\nI0519 21:35:43.039127 1507 log.go:172] (0xc0007b4b00) Go away received\nI0519 21:35:43.039719 1507 log.go:172] (0xc0007b4b00) (0xc0007b0000) Stream removed, broadcasting: 1\nI0519 21:35:43.039747 1507 log.go:172] (0xc0007b4b00) (0xc000609ae0) Stream removed, broadcasting: 3\nI0519 21:35:43.039765 1507 log.go:172] (0xc0007b4b00) (0xc0005fa000) Stream removed, broadcasting: 5\n" May 19 21:35:43.044: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:35:43.044: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 21:35:43.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4462 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:35:43.323: INFO: stderr: "I0519 21:35:43.175743 1530 log.go:172] (0xc000afa000) (0xc0005c2820) Create stream\nI0519 21:35:43.175814 1530 log.go:172] (0xc000afa000) (0xc0005c2820) Stream added, broadcasting: 1\nI0519 21:35:43.179528 1530 log.go:172] (0xc000afa000) Reply frame received for 1\nI0519 21:35:43.179583 1530 log.go:172] (0xc000afa000) (0xc00062dd60) Create stream\nI0519 21:35:43.179600 1530 log.go:172] 
(0xc000afa000) (0xc00062dd60) Stream added, broadcasting: 3\nI0519 21:35:43.180819 1530 log.go:172] (0xc000afa000) Reply frame received for 3\nI0519 21:35:43.180874 1530 log.go:172] (0xc000afa000) (0xc0004db5e0) Create stream\nI0519 21:35:43.180900 1530 log.go:172] (0xc000afa000) (0xc0004db5e0) Stream added, broadcasting: 5\nI0519 21:35:43.182415 1530 log.go:172] (0xc000afa000) Reply frame received for 5\nI0519 21:35:43.258792 1530 log.go:172] (0xc000afa000) Data frame received for 5\nI0519 21:35:43.258819 1530 log.go:172] (0xc0004db5e0) (5) Data frame handling\nI0519 21:35:43.258833 1530 log.go:172] (0xc0004db5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 21:35:43.317036 1530 log.go:172] (0xc000afa000) Data frame received for 3\nI0519 21:35:43.317092 1530 log.go:172] (0xc00062dd60) (3) Data frame handling\nI0519 21:35:43.317304 1530 log.go:172] (0xc000afa000) Data frame received for 5\nI0519 21:35:43.317327 1530 log.go:172] (0xc0004db5e0) (5) Data frame handling\nI0519 21:35:43.317387 1530 log.go:172] (0xc00062dd60) (3) Data frame sent\nI0519 21:35:43.317425 1530 log.go:172] (0xc000afa000) Data frame received for 3\nI0519 21:35:43.317462 1530 log.go:172] (0xc00062dd60) (3) Data frame handling\nI0519 21:35:43.319084 1530 log.go:172] (0xc000afa000) Data frame received for 1\nI0519 21:35:43.319149 1530 log.go:172] (0xc0005c2820) (1) Data frame handling\nI0519 21:35:43.319175 1530 log.go:172] (0xc0005c2820) (1) Data frame sent\nI0519 21:35:43.319188 1530 log.go:172] (0xc000afa000) (0xc0005c2820) Stream removed, broadcasting: 1\nI0519 21:35:43.319340 1530 log.go:172] (0xc000afa000) Go away received\nI0519 21:35:43.319533 1530 log.go:172] (0xc000afa000) (0xc0005c2820) Stream removed, broadcasting: 1\nI0519 21:35:43.319553 1530 log.go:172] (0xc000afa000) (0xc00062dd60) Stream removed, broadcasting: 3\nI0519 21:35:43.319563 1530 log.go:172] (0xc000afa000) (0xc0004db5e0) Stream removed, broadcasting: 5\n" May 19 21:35:43.323: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:35:43.323: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 21:35:43.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4462 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:35:43.568: INFO: stderr: "I0519 21:35:43.462181 1550 log.go:172] (0xc00099a000) (0xc00057a000) Create stream\nI0519 21:35:43.462254 1550 log.go:172] (0xc00099a000) (0xc00057a000) Stream added, broadcasting: 1\nI0519 21:35:43.465770 1550 log.go:172] (0xc00099a000) Reply frame received for 1\nI0519 21:35:43.465810 1550 log.go:172] (0xc00099a000) (0xc0004274a0) Create stream\nI0519 21:35:43.465819 1550 log.go:172] (0xc00099a000) (0xc0004274a0) Stream added, broadcasting: 3\nI0519 21:35:43.466670 1550 log.go:172] (0xc00099a000) Reply frame received for 3\nI0519 21:35:43.466700 1550 log.go:172] (0xc00099a000) (0xc00057a280) Create stream\nI0519 21:35:43.466709 1550 log.go:172] (0xc00099a000) (0xc00057a280) Stream added, broadcasting: 5\nI0519 21:35:43.467534 1550 log.go:172] (0xc00099a000) Reply frame received for 5\nI0519 21:35:43.526479 1550 log.go:172] (0xc00099a000) Data frame received for 5\nI0519 21:35:43.526514 1550 log.go:172] (0xc00057a280) (5) Data frame handling\nI0519 21:35:43.526543 1550 log.go:172] (0xc00057a280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0519 21:35:43.559988 1550 log.go:172] (0xc00099a000) Data frame received for 5\nI0519 21:35:43.560018 1550 log.go:172] (0xc00057a280) (5) Data frame handling\nI0519 21:35:43.560091 1550 log.go:172] (0xc00099a000) Data frame received for 3\nI0519 21:35:43.560150 1550 log.go:172] (0xc0004274a0) (3) Data frame handling\nI0519 21:35:43.560199 1550 log.go:172] (0xc0004274a0) (3) Data frame sent\nI0519 21:35:43.560331 1550 log.go:172] (0xc00099a000) Data frame received for 3\nI0519 21:35:43.560343 1550 log.go:172] (0xc0004274a0) (3) Data frame handling\nI0519 21:35:43.562181 1550 log.go:172] (0xc00099a000) Data frame received for 1\nI0519 21:35:43.562194 1550 log.go:172] (0xc00057a000) (1) Data frame handling\nI0519 21:35:43.562200 1550 log.go:172] (0xc00057a000) (1) Data frame sent\nI0519 21:35:43.562207 1550 log.go:172] (0xc00099a000) (0xc00057a000) Stream removed, broadcasting: 1\nI0519 21:35:43.562215 1550 log.go:172] (0xc00099a000) Go away received\nI0519 21:35:43.562750 1550 log.go:172] (0xc00099a000) (0xc00057a000) Stream removed, broadcasting: 1\nI0519 21:35:43.562773 1550 log.go:172] (0xc00099a000) (0xc0004274a0) Stream removed, broadcasting: 3\nI0519 21:35:43.562787 1550 log.go:172] (0xc00099a000) (0xc00057a280) Stream removed, broadcasting: 5\n" May 19 21:35:43.568: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:35:43.568: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 21:35:43.568: INFO: Waiting for statefulset status.replicas updated to 0 May 19 21:35:43.571: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 19 21:35:53.606: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 21:35:53.606: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 19 21:35:53.606: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 19 21:35:53.618: INFO: POD NODE PHASE GRACE CONDITIONS May 19 21:35:53.618: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC }] May 19 21:35:53.618: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:53.618: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:53.618: INFO: May 19 21:35:53.618: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 21:35:54.622: INFO: POD NODE PHASE GRACE CONDITIONS May 19 21:35:54.622: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC }] May 19 21:35:54.622: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:54.623: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:54.623: INFO: May 19 21:35:54.623: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 21:35:55.628: INFO: POD NODE PHASE GRACE CONDITIONS May 19 21:35:55.628: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC }] May 19 21:35:55.628: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:55.628: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:55.628: INFO: May 19 21:35:55.628: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 21:35:56.633: INFO: POD NODE PHASE GRACE 
CONDITIONS May 19 21:35:56.633: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC }] May 19 21:35:56.633: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:56.633: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:56.633: INFO: May 19 21:35:56.633: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 21:35:57.637: INFO: POD NODE PHASE GRACE CONDITIONS May 19 21:35:57.638: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC }] May 19 21:35:57.638: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:57.638: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:57.638: INFO: May 19 21:35:57.638: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 21:35:58.643: INFO: POD NODE PHASE GRACE CONDITIONS May 19 21:35:58.643: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 
+0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:11 +0000 UTC }] May 19 21:35:58.643: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:58.643: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-19 21:35:31 +0000 UTC }] May 19 21:35:58.643: INFO: May 19 21:35:58.643: INFO: StatefulSet ss has not reached scale 0, at 3 May 19 21:35:59.648: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.96665741s May 19 21:36:00.651: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.962507142s May 19 21:36:01.655: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.958774566s May 19 21:36:02.658: INFO: Verifying statefulset ss doesn't scale past 0 for another 955.063981ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4462 May 19 21:36:03.661: INFO: Scaling statefulset ss to 0 May 19 21:36:03.667: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 19 21:36:03.668: INFO: Deleting all statefulsets in ns statefulset-4462 May 19 21:36:03.670: INFO: Scaling statefulset ss to 0 May 19 21:36:03.696: INFO: Waiting for statefulset status.replicas updated to 0 May 19 21:36:03.698: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:36:03.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4462" for this suite.
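The scale-down exercised above can be approximated by hand through the ordinary scale path. A minimal sketch with kubectl, reusing the object and namespace names from this run (the commands are illustrative, not the test's own code):

kubectl scale statefulset ss --replicas=0 --namespace=statefulset-4462
# poll until the controller reports zero replicas
kubectl get statefulset ss --namespace=statefulset-4462 -o jsonpath='{.status.replicas}'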
• [SLOW TEST:52.574 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":113,"skipped":2015,"failed":0} SSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:36:03.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2545, will wait for the garbage collector to delete the pods May 19 21:36:09.859: INFO: Deleting Job.batch foo took: 21.624061ms May 19 21:36:10.159: INFO: Terminating Job.batch foo pods took: 300.24374ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:36:49.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2545" for this suite. 
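The Job deletion above leans on the garbage collector: the Job object disappears first, and its pods are terminated asynchronously afterwards, which is why the test waits before asserting. A minimal sketch of the equivalent manual steps (names reused from this run; kubectl deletes dependents by default):

kubectl delete job foo --namespace=job-2545
# in another terminal, watch the garbage collector terminate the Job's pods
kubectl get pods --namespace=job-2545 --watch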
• [SLOW TEST:45.849 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":114,"skipped":2018,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:36:49.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 19 21:36:49.637: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 19 21:36:59.547: INFO: >>> kubeConfig: /root/.kube/config May 19 21:37:02.494: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:37:14.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2387" for this suite. 
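What this spec checks is that every served version of a custom resource is published into the cluster's aggregated OpenAPI document. A minimal sketch for verifying that by hand; the group (example.com) and kind (MyKind) below are hypothetical placeholders, and the definition-name pattern assumes the reversed-group convention used for built-in types:

kubectl get --raw /openapi/v2 > swagger.json
# each served version should contribute its own schema definition
grep -E -c 'com\.example\.v[12]\.MyKind' swagger.json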
• [SLOW TEST:24.494 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":115,"skipped":2037,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:37:14.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 19 21:37:14.171: INFO: Waiting up to 5m0s for pod "pod-2227c24f-6f92-43e0-9b7c-c1eb7275d1bd" in namespace "emptydir-3944" to be "success or failure" May 19 21:37:14.183: INFO: Pod "pod-2227c24f-6f92-43e0-9b7c-c1eb7275d1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.286112ms May 19 21:37:16.187: INFO: Pod "pod-2227c24f-6f92-43e0-9b7c-c1eb7275d1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015444769s May 19 21:37:18.191: INFO: Pod "pod-2227c24f-6f92-43e0-9b7c-c1eb7275d1bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01991554s STEP: Saw pod success May 19 21:37:18.191: INFO: Pod "pod-2227c24f-6f92-43e0-9b7c-c1eb7275d1bd" satisfied condition "success or failure" May 19 21:37:18.195: INFO: Trying to get logs from node jerma-worker pod pod-2227c24f-6f92-43e0-9b7c-c1eb7275d1bd container test-container: STEP: delete the pod May 19 21:37:18.226: INFO: Waiting for pod pod-2227c24f-6f92-43e0-9b7c-c1eb7275d1bd to disappear May 19 21:37:18.267: INFO: Pod pod-2227c24f-6f92-43e0-9b7c-c1eb7275d1bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:37:18.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3944" for this suite. 
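The pod this test creates mounts an emptyDir on the node's default medium and inspects the mount's permission bits. A minimal, self-contained sketch of that shape (the pod name and image are illustrative; the conformance suite uses its own test image):

kubectl apply --namespace=emptydir-3944 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # default medium: node-local storage
EOF
kubectl logs emptydir-mode-demo --namespace=emptydir-3944   # prints the mode, e.g. 777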
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":2056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:37:18.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:37:18.357: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 19 21:37:18.603: INFO: Number of nodes with available pods: 0 May 19 21:37:18.603: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 19 21:37:18.732: INFO: Number of nodes with available pods: 0 May 19 21:37:18.732: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:37:19.762: INFO: Number of nodes with available pods: 0 May 19 21:37:19.762: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:37:20.736: INFO: Number of nodes with available pods: 0 May 19 21:37:20.736: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:37:21.736: INFO: Number of nodes with available pods: 0 May 19 21:37:21.736: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:37:22.735: INFO: Number of nodes with available pods: 1 May 19 21:37:22.735: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 19 21:37:22.814: INFO: Number of nodes with available pods: 1 May 19 21:37:22.814: INFO: Number of running nodes: 0, number of available pods: 1 May 19 21:37:23.819: INFO: Number of nodes with available pods: 0 May 19 21:37:23.819: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 19 21:37:23.828: INFO: Number of nodes with available pods: 0 May 19 21:37:23.828: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:37:24.858: INFO: Number of nodes with available pods: 0 May 19 21:37:24.858: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:37:25.831: INFO: Number of nodes with available pods: 0 May 19 21:37:25.831: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:37:26.833: INFO: Number of nodes with available pods: 0 May 19 21:37:26.833: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:37:27.841: INFO: Number of nodes with available pods: 0 May 19 21:37:27.841: INFO: Node jerma-worker2 is running more than one daemon pod May 19 21:37:28.833: INFO: Number of nodes with available pods: 0 May 19 21:37:28.833: INFO: Node jerma-worker2 is running more 
than one daemon pod May 19 21:37:29.833: INFO: Number of nodes with available pods: 1 May 19 21:37:29.833: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-936, will wait for the garbage collector to delete the pods May 19 21:37:29.899: INFO: Deleting DaemonSet.extensions daemon-set took: 7.149157ms May 19 21:37:30.199: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.255562ms May 19 21:37:33.903: INFO: Number of nodes with available pods: 0 May 19 21:37:33.903: INFO: Number of running nodes: 0, number of available pods: 0 May 19 21:37:33.906: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-936/daemonsets","resourceVersion":"17534009"},"items":null} May 19 21:37:33.908: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-936/pods","resourceVersion":"17534009"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:37:33.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-936" for this suite. • [SLOW TEST:15.690 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":117,"skipped":2096,"failed":0} [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:37:33.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 19 21:37:38.072: INFO: &Pod{ObjectMeta:{send-events-39737e40-ea5f-4db7-bcc7-e5f3a0105b1a events-2234 /api/v1/namespaces/events-2234/pods/send-events-39737e40-ea5f-4db7-bcc7-e5f3a0105b1a f84c2eaa-fcd8-4258-ba86-1b48a368d5f1 17534030 0 2020-05-19 21:37:34 +0000 UTC map[name:foo time:997943905] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktbqp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktbqp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktbqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:37:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:37:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:37:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 21:37:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.5,StartTime:2020-05-19 21:37:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 21:37:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://d2240ac629eec0bb637e507f0bd37acaf59e9388ce102b52eb7e09127be6a2d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 19 21:37:40.079: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 19 21:37:42.082: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:37:42.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2234" for this suite. • [SLOW TEST:8.129 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":118,"skipped":2096,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:37:42.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-cce800b7-2599-418c-9493-5b008e9e0dc9 STEP: Creating a pod to test consume secrets May 19 21:37:42.207: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-971b2ed1-0684-4798-8f32-ceb992dec465" in namespace "projected-5166" to be "success or failure" May 19 21:37:42.214: INFO: Pod "pod-projected-secrets-971b2ed1-0684-4798-8f32-ceb992dec465": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008565ms May 19 21:37:44.218: INFO: Pod "pod-projected-secrets-971b2ed1-0684-4798-8f32-ceb992dec465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010617247s May 19 21:37:46.250: INFO: Pod "pod-projected-secrets-971b2ed1-0684-4798-8f32-ceb992dec465": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042439703s STEP: Saw pod success May 19 21:37:46.250: INFO: Pod "pod-projected-secrets-971b2ed1-0684-4798-8f32-ceb992dec465" satisfied condition "success or failure" May 19 21:37:46.253: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-971b2ed1-0684-4798-8f32-ceb992dec465 container projected-secret-volume-test: STEP: delete the pod May 19 21:37:46.332: INFO: Waiting for pod pod-projected-secrets-971b2ed1-0684-4798-8f32-ceb992dec465 to disappear May 19 21:37:46.361: INFO: Pod pod-projected-secrets-971b2ed1-0684-4798-8f32-ceb992dec465 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:37:46.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5166" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2100,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:37:46.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:37:50.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8560" for this suite. 
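The hostAliases test reduces to a single pod field: entries under spec.hostAliases are appended by the kubelet to the container's /etc/hosts. A minimal sketch (pod name, IPs, and hostnames are illustrative):

kubectl apply --namespace=kubelet-test-8560 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo            # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox
    command: ["cat", "/etc/hosts"]
EOF
kubectl logs hostaliases-demo --namespace=kubelet-test-8560   # shows the injected entries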
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2109,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:37:50.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 19 21:37:50.574: INFO: Waiting up to 5m0s for pod "pod-e0adfac8-7c09-4c7c-af2e-a3aa8977e6e3" in namespace "emptydir-4900" to be "success or failure" May 19 21:37:50.594: INFO: Pod "pod-e0adfac8-7c09-4c7c-af2e-a3aa8977e6e3": Phase="Pending", Reason="", readiness=false. Elapsed: 20.174642ms May 19 21:37:52.597: INFO: Pod "pod-e0adfac8-7c09-4c7c-af2e-a3aa8977e6e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023133658s May 19 21:37:54.602: INFO: Pod "pod-e0adfac8-7c09-4c7c-af2e-a3aa8977e6e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027704599s STEP: Saw pod success May 19 21:37:54.602: INFO: Pod "pod-e0adfac8-7c09-4c7c-af2e-a3aa8977e6e3" satisfied condition "success or failure" May 19 21:37:54.605: INFO: Trying to get logs from node jerma-worker2 pod pod-e0adfac8-7c09-4c7c-af2e-a3aa8977e6e3 container test-container: STEP: delete the pod May 19 21:37:54.627: INFO: Waiting for pod pod-e0adfac8-7c09-4c7c-af2e-a3aa8977e6e3 to disappear May 19 21:37:54.631: INFO: Pod pod-e0adfac8-7c09-4c7c-af2e-a3aa8977e6e3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:37:54.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4900" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2115,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:37:54.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:37:54.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4c1a056-5aec-45e0-8905-64159aad6a47" in namespace "downward-api-2579" to be "success or failure" May 19 21:37:54.756: INFO: Pod "downwardapi-volume-d4c1a056-5aec-45e0-8905-64159aad6a47": Phase="Pending", Reason="", readiness=false. Elapsed: 20.389124ms May 19 21:37:56.760: INFO: Pod "downwardapi-volume-d4c1a056-5aec-45e0-8905-64159aad6a47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024613487s May 19 21:37:58.765: INFO: Pod "downwardapi-volume-d4c1a056-5aec-45e0-8905-64159aad6a47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029129614s STEP: Saw pod success May 19 21:37:58.765: INFO: Pod "downwardapi-volume-d4c1a056-5aec-45e0-8905-64159aad6a47" satisfied condition "success or failure" May 19 21:37:58.767: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d4c1a056-5aec-45e0-8905-64159aad6a47 container client-container: STEP: delete the pod May 19 21:37:58.801: INFO: Waiting for pod downwardapi-volume-d4c1a056-5aec-45e0-8905-64159aad6a47 to disappear May 19 21:37:58.807: INFO: Pod downwardapi-volume-d4c1a056-5aec-45e0-8905-64159aad6a47 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:37:58.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2579" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2120,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:37:58.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:38:58.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7330" for this suite. • [SLOW TEST:60.079 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:38:58.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-387d06c7-e443-4f56-9c14-42bd540067fc in namespace container-probe-9596 May 19 21:39:03.027: INFO: Started pod liveness-387d06c7-e443-4f56-9c14-42bd540067fc in namespace container-probe-9596 STEP: checking the pod's current state and verifying that restartCount is present May 19 21:39:03.030: INFO: Initial restart count of pod liveness-387d06c7-e443-4f56-9c14-42bd540067fc is 0 May 19 21:39:19.102: INFO: Restart count of pod 
container-probe-9596/liveness-387d06c7-e443-4f56-9c14-42bd540067fc is now 1 (16.072298568s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:39:19.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9596" for this suite. • [SLOW TEST:20.277 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2190,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:39:19.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:39:20.175: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:39:22.204: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521160, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521160, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521160, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521160, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:39:24.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521160, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521160, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521160, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521160, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:39:27.235: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:39:27.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7334" for this suite. STEP: Destroying namespace "webhook-7334-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.332 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":125,"skipped":2199,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:39:27.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:39:27.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3686" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":126,"skipped":2215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:39:27.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 19 21:39:27.740: INFO: Waiting up to 5m0s for pod "pod-25fb7c57-2057-4b4c-98e5-6fcabab52f31" in namespace "emptydir-8877" to be "success or failure" May 19 21:39:27.743: INFO: Pod "pod-25fb7c57-2057-4b4c-98e5-6fcabab52f31": Phase="Pending", Reason="", readiness=false. Elapsed: 3.387217ms May 19 21:39:29.747: INFO: Pod "pod-25fb7c57-2057-4b4c-98e5-6fcabab52f31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007483915s May 19 21:39:31.752: INFO: Pod "pod-25fb7c57-2057-4b4c-98e5-6fcabab52f31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01197599s STEP: Saw pod success May 19 21:39:31.752: INFO: Pod "pod-25fb7c57-2057-4b4c-98e5-6fcabab52f31" satisfied condition "success or failure" May 19 21:39:31.755: INFO: Trying to get logs from node jerma-worker pod pod-25fb7c57-2057-4b4c-98e5-6fcabab52f31 container test-container: STEP: delete the pod May 19 21:39:31.818: INFO: Waiting for pod pod-25fb7c57-2057-4b4c-98e5-6fcabab52f31 to disappear May 19 21:39:31.828: INFO: Pod pod-25fb7c57-2057-4b4c-98e5-6fcabab52f31 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:39:31.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8877" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:39:31.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-363/configmap-test-3d17e111-6ca3-41e5-9b84-48910d9a0eee STEP: Creating a pod to test consume configMaps May 19 21:39:31.937: INFO: Waiting up to 5m0s for pod "pod-configmaps-0bc4720c-a2a9-41bb-af89-4425975bad92" in namespace "configmap-363" to be "success or failure" May 19 21:39:31.971: INFO: Pod "pod-configmaps-0bc4720c-a2a9-41bb-af89-4425975bad92": Phase="Pending", Reason="", readiness=false. Elapsed: 33.917932ms May 19 21:39:33.975: INFO: Pod "pod-configmaps-0bc4720c-a2a9-41bb-af89-4425975bad92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037499518s May 19 21:39:35.980: INFO: Pod "pod-configmaps-0bc4720c-a2a9-41bb-af89-4425975bad92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042104328s STEP: Saw pod success May 19 21:39:35.980: INFO: Pod "pod-configmaps-0bc4720c-a2a9-41bb-af89-4425975bad92" satisfied condition "success or failure" May 19 21:39:35.983: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0bc4720c-a2a9-41bb-af89-4425975bad92 container env-test: STEP: delete the pod May 19 21:39:36.006: INFO: Waiting for pod pod-configmaps-0bc4720c-a2a9-41bb-af89-4425975bad92 to disappear May 19 21:39:36.010: INFO: Pod pod-configmaps-0bc4720c-a2a9-41bb-af89-4425975bad92 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:39:36.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-363" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2281,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:39:36.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2791 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-2791 May 19 21:39:36.120: INFO: Found 0 stateful pods, waiting for 1 May 19 21:39:46.125: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 19 21:39:46.155: INFO: Deleting all statefulset in ns statefulset-2791 May 19 21:39:46.161: INFO: Scaling statefulset ss to 0 May 19 21:40:16.245: INFO: Waiting for statefulset status.replicas updated to 0 May 19 21:40:16.249: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:40:16.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2791" for this suite. 
• [SLOW TEST:40.275 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":129,"skipped":2286,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:40:16.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 19 21:40:16.348: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 19 21:40:25.423: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:40:25.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9325" for this suite. 
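The submit/observe/delete cycle above can be reproduced with a watch in one terminal and a graceful delete in another; the watch reports a MODIFIED event when deletionTimestamp is set, then a final DELETED event. A minimal sketch (pod name and image are illustrative; 30s is the default grace period):

# terminal 1: observe pod lifecycle events
kubectl get pods --namespace=pods-9325 --watch
# terminal 2: submit, then gracefully delete
kubectl run pod-demo --image=nginx --restart=Never --namespace=pods-9325
kubectl delete pod pod-demo --grace-period=30 --namespace=pods-9325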
• [SLOW TEST:9.142 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2306,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:40:25.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:40:25.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4277" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2319,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:40:25.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 19 21:40:25.702: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1154 /api/v1/namespaces/watch-1154/configmaps/e2e-watch-test-resource-version aa2edd35-eff7-4e71-b173-9c89b62915fd 17534974 0 2020-05-19 21:40:25 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 21:40:25.703: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1154 /api/v1/namespaces/watch-1154/configmaps/e2e-watch-test-resource-version aa2edd35-eff7-4e71-b173-9c89b62915fd 17534975 0 2020-05-19 21:40:25 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:40:25.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1154" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":132,"skipped":2322,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:40:25.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 19 21:40:25.797: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5413 /api/v1/namespaces/watch-5413/configmaps/e2e-watch-test-watch-closed aefbbdb1-312c-4b1e-b766-c9ecb8d9799e 17534981 0 2020-05-19 21:40:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 21:40:25.797: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5413 /api/v1/namespaces/watch-5413/configmaps/e2e-watch-test-watch-closed aefbbdb1-312c-4b1e-b766-c9ecb8d9799e 17534982 0 2020-05-19 21:40:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 19 21:40:25.849: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5413 /api/v1/namespaces/watch-5413/configmaps/e2e-watch-test-watch-closed aefbbdb1-312c-4b1e-b766-c9ecb8d9799e 17534983 0 2020-05-19 21:40:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 21:40:25.849: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5413 /api/v1/namespaces/watch-5413/configmaps/e2e-watch-test-watch-closed aefbbdb1-312c-4b1e-b766-c9ecb8d9799e 17534984 0 2020-05-19 21:40:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:40:25.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5413" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":133,"skipped":2333,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:40:25.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 19 21:40:25.956: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 19 21:40:25.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8868' May 19 21:40:31.196: INFO: stderr: "" May 19 21:40:31.196: INFO: stdout: "service/agnhost-slave created\n" May 19 21:40:31.197: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 19 21:40:31.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8868' May 19 21:40:34.817: INFO: stderr: "" May 19 21:40:34.817: INFO: stdout: "service/agnhost-master created\n" May 19 21:40:34.817: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 19 21:40:34.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8868' May 19 21:40:37.968: INFO: stderr: "" May 19 21:40:37.968: INFO: stdout: "service/frontend created\n" May 19 21:40:37.968: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 19 21:40:37.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8868' May 19 21:40:40.062: INFO: stderr: "" May 19 21:40:40.062: INFO: stdout: "deployment.apps/frontend created\n" May 19 21:40:40.062: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 19 21:40:40.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8868' May 19 21:40:42.538: INFO: stderr: "" May 19 21:40:42.538: INFO: stdout: "deployment.apps/agnhost-master created\n" May 19 21:40:42.538: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 19 21:40:42.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8868' May 19 21:40:44.869: INFO: stderr: "" May 19 21:40:44.869: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 19 21:40:44.869: INFO: Waiting for all frontend pods to be Running. May 19 21:40:49.919: INFO: Waiting for frontend to serve content. May 19 21:40:50.956: INFO: Trying to add a new entry to the guestbook. May 19 21:40:50.968: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 19 21:40:50.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8868' May 19 21:40:51.185: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 21:40:51.185: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 19 21:40:51.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8868' May 19 21:40:51.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 19 21:40:51.335: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 19 21:40:51.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8868' May 19 21:40:51.466: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 21:40:51.466: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 21:40:51.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8868' May 19 21:40:51.567: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 21:40:51.567: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 21:40:51.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8868' May 19 21:40:51.663: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 21:40:51.663: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 19 21:40:51.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8868' May 19 21:40:51.797: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 21:40:51.797: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:40:51.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8868" for this suite. 
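Note on the guestbook validation above: once the manifests are applied, the test only needs the frontend to answer the agnhost guestbook's HTTP interface. A hand-run approximation, assuming the image's /guestbook endpoint with cmd/key/value query parameters (this is what agnhost 2.8 exposes, but worth re-checking for other versions):

kubectl rollout status deployment/frontend --namespace=kubectl-8868
kubectl port-forward svc/frontend 8080:80 --namespace=kubectl-8868 &
curl 'http://localhost:8080/guestbook?cmd=set&key=messages&value=TestEntry'
curl 'http://localhost:8080/guestbook?cmd=get&key=messages'   # expect TestEntry back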
• [SLOW TEST:25.934 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":134,"skipped":2354,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:40:51.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:40:51.910: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:40:54.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5673" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":135,"skipped":2368,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:40:54.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-efb42227-4a60-4ef4-8fb0-c1e6f66775bb STEP: Creating a pod to test consume secrets May 19 21:40:54.806: INFO: Waiting up to 5m0s for pod "pod-secrets-96323e1f-3e36-4019-b502-b1f534bd8002" in namespace "secrets-2845" to be "success or failure" May 19 21:40:54.813: INFO: Pod "pod-secrets-96323e1f-3e36-4019-b502-b1f534bd8002": Phase="Pending", Reason="", readiness=false. Elapsed: 6.970471ms May 19 21:40:56.817: INFO: Pod "pod-secrets-96323e1f-3e36-4019-b502-b1f534bd8002": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01056297s May 19 21:40:58.821: INFO: Pod "pod-secrets-96323e1f-3e36-4019-b502-b1f534bd8002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015320278s STEP: Saw pod success May 19 21:40:58.821: INFO: Pod "pod-secrets-96323e1f-3e36-4019-b502-b1f534bd8002" satisfied condition "success or failure" May 19 21:40:58.824: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-96323e1f-3e36-4019-b502-b1f534bd8002 container secret-volume-test: STEP: delete the pod May 19 21:40:58.848: INFO: Waiting for pod pod-secrets-96323e1f-3e36-4019-b502-b1f534bd8002 to disappear May 19 21:40:58.852: INFO: Pod pod-secrets-96323e1f-3e36-4019-b502-b1f534bd8002 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:40:58.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2845" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:40:58.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8163.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8163.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 21:41:04.973: INFO: DNS probes using dns-test-be6a878b-0a1d-4a02-8fee-b2a4bd7bd149 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8163.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8163.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 21:41:13.114: INFO: File wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 19 21:41:13.116: INFO: File jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 21:41:13.116: INFO: Lookups using dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 failed for: [wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local] May 19 21:41:18.125: INFO: File wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 21:41:18.128: INFO: File jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 21:41:18.128: INFO: Lookups using dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 failed for: [wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local] May 19 21:41:23.122: INFO: File wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 21:41:23.126: INFO: File jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 21:41:23.126: INFO: Lookups using dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 failed for: [wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local] May 19 21:41:28.121: INFO: File wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 21:41:28.125: INFO: File jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 21:41:28.125: INFO: Lookups using dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 failed for: [wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local] May 19 21:41:33.122: INFO: File wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 21:41:33.127: INFO: File jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local from pod dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 19 21:41:33.127: INFO: Lookups using dns-8163/dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 failed for: [wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local] May 19 21:41:38.159: INFO: DNS probes using dns-test-02be1b91-a9ad-4891-83ff-9ce2532273c7 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8163.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8163.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8163.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8163.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 21:41:47.433: INFO: DNS probes using dns-test-cc8282bb-5fc7-455d-aa0b-2ba4816e7b7c succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:41:47.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8163" for this suite. • [SLOW TEST:48.973 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":137,"skipped":2449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:41:47.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:41:48.680: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:41:50.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521308, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521308, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521308, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521308, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:41:52.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521308, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521308, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521308, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521308, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:41:55.722: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 19 21:41:55.757: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:41:55.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7299" for this suite. STEP: Destroying namespace "webhook-7299-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.049 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":138,"skipped":2481,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:41:55.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:42:00.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8140" for this suite. 
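Note on the concurrent-watches test above: the property being checked is that independent watchers on one collection observe events in the same resourceVersion order. An illustrative check with two raw watch streams (assumes jq is installed and kubectl may stream the raw response):

kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true" | jq -r '.object.metadata.resourceVersion' > a.txt &
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true" | jq -r '.object.metadata.resourceVersion' > b.txt &
# ...create/modify/delete some configmaps, stop both watches, then:
diff a.txt b.txt   # both watchers should have recorded the same order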
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":139,"skipped":2496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:42:00.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:42:01.291: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:42:03.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521321, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521321, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521321, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521321, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:42:06.375: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 19 21:42:10.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-6814 to-be-attached-pod -i -c=container1' May 19 21:42:10.563: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:42:10.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6814" for this suite. STEP: Destroying namespace "webhook-6814-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.032 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":140,"skipped":2528,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:42:10.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:42:14.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2298" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:42:14.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:42:19.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6023" for this suite. 
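Note on the terminated-reason test above: the assertion is plain container-status inspection; a command that exits non-zero leaves state.terminated.reason populated. Sketch (pod name and image are illustrative):

kubectl run always-fails --image=busybox --restart=Never -- /bin/false
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# prints a reason such as "Error" once the container has exited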
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2599,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:42:19.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:42:19.456: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7045e28e-29c8-40e8-bd72-1032dad5040e" in namespace "downward-api-1564" to be "success or failure" May 19 21:42:19.460: INFO: Pod "downwardapi-volume-7045e28e-29c8-40e8-bd72-1032dad5040e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.529096ms May 19 21:42:21.515: INFO: Pod "downwardapi-volume-7045e28e-29c8-40e8-bd72-1032dad5040e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058248714s May 19 21:42:23.522: INFO: Pod "downwardapi-volume-7045e28e-29c8-40e8-bd72-1032dad5040e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065687249s STEP: Saw pod success May 19 21:42:23.522: INFO: Pod "downwardapi-volume-7045e28e-29c8-40e8-bd72-1032dad5040e" satisfied condition "success or failure" May 19 21:42:23.525: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7045e28e-29c8-40e8-bd72-1032dad5040e container client-container: STEP: delete the pod May 19 21:42:23.558: INFO: Waiting for pod downwardapi-volume-7045e28e-29c8-40e8-bd72-1032dad5040e to disappear May 19 21:42:23.592: INFO: Pod downwardapi-volume-7045e28e-29c8-40e8-bd72-1032dad5040e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:42:23.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1564" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2643,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:42:23.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 19 21:42:23.669: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-a 380930b6-db3e-4511-9821-3af9dd6f6583 17536011 0 2020-05-19 21:42:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 21:42:23.669: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-a 380930b6-db3e-4511-9821-3af9dd6f6583 17536011 0 2020-05-19 21:42:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 19 21:42:33.679: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-a 380930b6-db3e-4511-9821-3af9dd6f6583 17536067 0 2020-05-19 21:42:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 19 21:42:33.679: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-a 380930b6-db3e-4511-9821-3af9dd6f6583 17536067 0 2020-05-19 21:42:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 19 21:42:43.689: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-a 380930b6-db3e-4511-9821-3af9dd6f6583 17536097 0 2020-05-19 21:42:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 21:42:43.690: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-a 380930b6-db3e-4511-9821-3af9dd6f6583 17536097 0 
2020-05-19 21:42:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 19 21:42:53.698: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-a 380930b6-db3e-4511-9821-3af9dd6f6583 17536127 0 2020-05-19 21:42:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 19 21:42:53.698: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-a 380930b6-db3e-4511-9821-3af9dd6f6583 17536127 0 2020-05-19 21:42:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 19 21:43:03.706: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-b 0df43e2f-e571-43fc-bc45-11244a7e5694 17536157 0 2020-05-19 21:43:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 21:43:03.706: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-b 0df43e2f-e571-43fc-bc45-11244a7e5694 17536157 0 2020-05-19 21:43:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 19 21:43:13.712: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-b 0df43e2f-e571-43fc-bc45-11244a7e5694 17536187 0 2020-05-19 21:43:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 19 21:43:13.712: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4067 /api/v1/namespaces/watch-4067/configmaps/e2e-watch-test-configmap-b 0df43e2f-e571-43fc-bc45-11244a7e5694 17536187 0 2020-05-19 21:43:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:43:23.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4067" for this suite. 
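Note on the A/B watcher test above: this is ordinary label selection applied to a watch; each watcher receives notifications only for configmaps whose label matches its selector. kubectl equivalents using the labels from the log:

kubectl get configmaps --watch -l watch-this-configmap=multiple-watchers-A
# and, for the "A or B" watcher, a set-based selector:
kubectl get configmaps --watch -l 'watch-this-configmap in (multiple-watchers-A, multiple-watchers-B)'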
• [SLOW TEST:60.123 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":144,"skipped":2686,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:43:23.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 19 21:43:31.984: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 21:43:31.987: INFO: Pod pod-with-prestop-exec-hook still exists May 19 21:43:33.987: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 21:43:33.992: INFO: Pod pod-with-prestop-exec-hook still exists May 19 21:43:35.987: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 21:43:35.992: INFO: Pod pod-with-prestop-exec-hook still exists May 19 21:43:37.987: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 21:43:37.992: INFO: Pod pod-with-prestop-exec-hook still exists May 19 21:43:39.987: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 21:43:39.992: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:43:39.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7123" for this suite. 
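Note on the preStop test above: a preStop exec handler runs inside the container after the delete request and before SIGTERM is sent, which is why the pod lingers for several poll cycles. Minimal sketch (the real test's handler instead calls back to the HTTP "handle" pod so execution can be verified):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop; sleep 5"]  # illustrative handler
EOF
kubectl delete pod pod-with-prestop-exec-hook   # hook runs before SIGTERM is sent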
• [SLOW TEST:16.282 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:43:40.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 19 21:43:40.947: INFO: Pod name wrapped-volume-race-8ad67745-bcc7-495a-8e44-70d6fcd80930: Found 0 pods out of 5 May 19 21:43:46.101: INFO: Pod name wrapped-volume-race-8ad67745-bcc7-495a-8e44-70d6fcd80930: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8ad67745-bcc7-495a-8e44-70d6fcd80930 in namespace emptydir-wrapper-6014, will wait for the garbage collector to delete the pods May 19 21:43:58.279: INFO: Deleting ReplicationController wrapped-volume-race-8ad67745-bcc7-495a-8e44-70d6fcd80930 took: 21.008109ms May 19 21:43:58.680: INFO: Terminating ReplicationController wrapped-volume-race-8ad67745-bcc7-495a-8e44-70d6fcd80930 pods took: 400.325207ms STEP: Creating RC which spawns configmap-volume pods May 19 21:44:09.714: INFO: Pod name wrapped-volume-race-f8c23ab6-8b5b-446e-a7b3-63415c8a54e0: Found 0 pods out of 5 May 19 21:44:14.720: INFO: Pod name wrapped-volume-race-f8c23ab6-8b5b-446e-a7b3-63415c8a54e0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f8c23ab6-8b5b-446e-a7b3-63415c8a54e0 in namespace emptydir-wrapper-6014, will wait for the garbage collector to delete the pods May 19 21:44:28.802: INFO: Deleting ReplicationController wrapped-volume-race-f8c23ab6-8b5b-446e-a7b3-63415c8a54e0 took: 8.168673ms May 19 21:44:29.202: INFO: Terminating ReplicationController wrapped-volume-race-f8c23ab6-8b5b-446e-a7b3-63415c8a54e0 pods took: 400.257718ms STEP: Creating RC which spawns configmap-volume pods May 19 21:44:39.367: INFO: Pod name wrapped-volume-race-62bf6ca8-c2bb-4edc-bcfc-7cdeb65e18a2: Found 0 pods out of 5 May 19 21:44:44.374: INFO: Pod name wrapped-volume-race-62bf6ca8-c2bb-4edc-bcfc-7cdeb65e18a2: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-62bf6ca8-c2bb-4edc-bcfc-7cdeb65e18a2 in namespace emptydir-wrapper-6014, will wait for the garbage collector to delete the pods May 19 21:45:00.464: INFO: Deleting ReplicationController wrapped-volume-race-62bf6ca8-c2bb-4edc-bcfc-7cdeb65e18a2 took: 7.709167ms May 19 21:45:00.864: INFO: Terminating ReplicationController wrapped-volume-race-62bf6ca8-c2bb-4edc-bcfc-7cdeb65e18a2 pods took: 400.263371ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:45:10.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6014" for this suite. • [SLOW TEST:90.832 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":146,"skipped":2798,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:45:10.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 19 21:45:11.498: INFO: created pod pod-service-account-defaultsa May 19 21:45:11.498: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 19 21:45:11.506: INFO: created pod pod-service-account-mountsa May 19 21:45:11.506: INFO: pod pod-service-account-mountsa service account token volume mount: true May 19 21:45:11.547: INFO: created pod pod-service-account-nomountsa May 19 21:45:11.547: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 19 21:45:11.554: INFO: created pod pod-service-account-defaultsa-mountspec May 19 21:45:11.554: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 19 21:45:11.607: INFO: created pod pod-service-account-mountsa-mountspec May 19 21:45:11.607: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 19 21:45:11.620: INFO: created pod pod-service-account-nomountsa-mountspec May 19 21:45:11.620: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 19 21:45:11.649: INFO: created pod pod-service-account-defaultsa-nomountspec May 19 21:45:11.649: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 19 21:45:11.662: INFO: created pod pod-service-account-mountsa-nomountspec May 19 21:45:11.662: INFO: pod pod-service-account-mountsa-nomountspec service 
account token volume mount: false May 19 21:45:11.699: INFO: created pod pod-service-account-nomountsa-nomountspec May 19 21:45:11.699: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:45:11.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3810" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":147,"skipped":2802,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:45:11.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 21:45:26.224: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:45:26.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2017" for this suite. 
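For reference, the spec above boils down to a container that writes its termination message to a custom terminationMessagePath while running as a non-root user. A minimal sketch of the same behavior, assuming kubectl access to a comparable cluster; the pod name, image tag, and uid below are illustrative, not taken from the test:

# Pod that writes its termination message to a non-default path as uid 1000;
# the kubelet copies the file contents into the container's terminated state.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    securityContext:
      runAsUser: 1000
    terminationMessagePath: /dev/termination-custom-log
EOF
# Once the pod has succeeded, read the message back from its status:
kubectl get pod termination-message-demo \
  -o go-template='{{range .status.containerStatuses}}{{.state.terminated.message}}{{end}}'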
• [SLOW TEST:14.805 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2806,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:45:26.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-6805d7f3-3d96-4e19-b8ac-5117549e4ba4 STEP: Creating a pod to test consume configMaps May 19 21:45:27.226: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a72ca16e-ebdd-42df-abcc-a1af4cb15962" in namespace "projected-6428" to be "success or failure" May 19 21:45:27.346: INFO: Pod "pod-projected-configmaps-a72ca16e-ebdd-42df-abcc-a1af4cb15962": Phase="Pending", Reason="", readiness=false. Elapsed: 120.264393ms May 19 21:45:29.617: INFO: Pod "pod-projected-configmaps-a72ca16e-ebdd-42df-abcc-a1af4cb15962": Phase="Pending", Reason="", readiness=false. Elapsed: 2.39139253s May 19 21:45:31.620: INFO: Pod "pod-projected-configmaps-a72ca16e-ebdd-42df-abcc-a1af4cb15962": Phase="Running", Reason="", readiness=true. Elapsed: 4.39413482s May 19 21:45:33.624: INFO: Pod "pod-projected-configmaps-a72ca16e-ebdd-42df-abcc-a1af4cb15962": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.39810865s STEP: Saw pod success May 19 21:45:33.624: INFO: Pod "pod-projected-configmaps-a72ca16e-ebdd-42df-abcc-a1af4cb15962" satisfied condition "success or failure" May 19 21:45:33.626: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-a72ca16e-ebdd-42df-abcc-a1af4cb15962 container projected-configmap-volume-test: STEP: delete the pod May 19 21:45:33.699: INFO: Waiting for pod pod-projected-configmaps-a72ca16e-ebdd-42df-abcc-a1af4cb15962 to disappear May 19 21:45:33.710: INFO: Pod pod-projected-configmaps-a72ca16e-ebdd-42df-abcc-a1af4cb15962 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:45:33.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6428" for this suite. • [SLOW TEST:7.090 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2816,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:45:33.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 19 21:45:33.791: INFO: Waiting up to 5m0s for pod "pod-c11e7983-e9e8-4db9-b035-23fd36ab3eb9" in namespace "emptydir-3977" to be "success or failure" May 19 21:45:33.795: INFO: Pod "pod-c11e7983-e9e8-4db9-b035-23fd36ab3eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.818276ms May 19 21:45:35.799: INFO: Pod "pod-c11e7983-e9e8-4db9-b035-23fd36ab3eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008237361s May 19 21:45:37.802: INFO: Pod "pod-c11e7983-e9e8-4db9-b035-23fd36ab3eb9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011276844s STEP: Saw pod success May 19 21:45:37.802: INFO: Pod "pod-c11e7983-e9e8-4db9-b035-23fd36ab3eb9" satisfied condition "success or failure" May 19 21:45:37.804: INFO: Trying to get logs from node jerma-worker2 pod pod-c11e7983-e9e8-4db9-b035-23fd36ab3eb9 container test-container: STEP: delete the pod May 19 21:45:37.837: INFO: Waiting for pod pod-c11e7983-e9e8-4db9-b035-23fd36ab3eb9 to disappear May 19 21:45:37.870: INFO: Pod pod-c11e7983-e9e8-4db9-b035-23fd36ab3eb9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:45:37.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3977" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2830,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:45:37.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:45:42.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4664" for this suite. 
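The "should not conflict" spec above amounts to mounting a secret volume and a configMap volume in the same pod (both are backed by emptyDir wrappers internally) and checking that the wrappers do not collide. A minimal sketch of that setup, with illustrative resource names:

# A secret and a configMap mounted side by side in one pod.
kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    # Listing both mount points succeeds only if neither volume clobbered the other.
    command: ["/bin/sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret
  - name: configmap-volume
    configMap:
      name: wrapper-configmap
EOF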
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":151,"skipped":2839,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:45:42.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 19 21:45:47.247: INFO: Successfully updated pod "labelsupdateb3fd636f-f7df-4225-a091-2fc97656e6ed" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:45:51.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3232" for this suite. • [SLOW TEST:9.112 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2846,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:45:51.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-51b5e131-13f8-4cbb-a4d9-8d4780780beb STEP: Creating a pod to test consume configMaps May 19 21:45:51.414: INFO: Waiting up to 5m0s for pod "pod-configmaps-e7ad27bb-85f4-4744-be16-dd0c89a322fd" in namespace "configmap-3032" to be "success or failure" May 19 21:45:51.418: INFO: Pod "pod-configmaps-e7ad27bb-85f4-4744-be16-dd0c89a322fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108426ms May 19 21:45:53.421: INFO: Pod "pod-configmaps-e7ad27bb-85f4-4744-be16-dd0c89a322fd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007931114s May 19 21:45:55.425: INFO: Pod "pod-configmaps-e7ad27bb-85f4-4744-be16-dd0c89a322fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011274205s STEP: Saw pod success May 19 21:45:55.425: INFO: Pod "pod-configmaps-e7ad27bb-85f4-4744-be16-dd0c89a322fd" satisfied condition "success or failure" May 19 21:45:55.427: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e7ad27bb-85f4-4744-be16-dd0c89a322fd container configmap-volume-test: STEP: delete the pod May 19 21:45:55.598: INFO: Waiting for pod pod-configmaps-e7ad27bb-85f4-4744-be16-dd0c89a322fd to disappear May 19 21:45:55.640: INFO: Pod pod-configmaps-e7ad27bb-85f4-4744-be16-dd0c89a322fd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:45:55.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3032" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2850,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:45:55.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0519 21:45:56.928976 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 19 21:45:56.929: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:45:56.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9137" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":154,"skipped":2856,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:45:56.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 19 21:45:57.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6725' May 19 21:46:00.369: INFO: stderr: "" May 19 21:46:00.369: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 21:46:00.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6725' May 19 21:46:00.494: INFO: stderr: "" May 19 21:46:00.494: INFO: stdout: "update-demo-nautilus-kt8k9 update-demo-nautilus-t6kcw " May 19 21:46:00.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kt8k9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' May 19 21:46:00.585: INFO: stderr: "" May 19 21:46:00.585: INFO: stdout: "" May 19 21:46:00.585: INFO: update-demo-nautilus-kt8k9 is created but not running May 19 21:46:05.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6725' May 19 21:46:05.683: INFO: stderr: "" May 19 21:46:05.683: INFO: stdout: "update-demo-nautilus-kt8k9 update-demo-nautilus-t6kcw " May 19 21:46:05.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kt8k9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' May 19 21:46:05.775: INFO: stderr: "" May 19 21:46:05.775: INFO: stdout: "true" May 19 21:46:05.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kt8k9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6725' May 19 21:46:05.877: INFO: stderr: "" May 19 21:46:05.877: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:46:05.877: INFO: validating pod update-demo-nautilus-kt8k9 May 19 21:46:05.882: INFO: got data: { "image": "nautilus.jpg" } May 19 21:46:05.882: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 21:46:05.882: INFO: update-demo-nautilus-kt8k9 is verified up and running May 19 21:46:05.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t6kcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' May 19 21:46:05.979: INFO: stderr: "" May 19 21:46:05.979: INFO: stdout: "true" May 19 21:46:05.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t6kcw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6725' May 19 21:46:06.087: INFO: stderr: "" May 19 21:46:06.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:46:06.087: INFO: validating pod update-demo-nautilus-t6kcw May 19 21:46:06.099: INFO: got data: { "image": "nautilus.jpg" } May 19 21:46:06.099: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 19 21:46:06.099: INFO: update-demo-nautilus-t6kcw is verified up and running STEP: rolling-update to new replication controller May 19 21:46:06.101: INFO: scanned /root for discovery docs: May 19 21:46:06.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6725' May 19 21:46:31.238: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 19 21:46:31.238: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 21:46:31.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6725' May 19 21:46:31.337: INFO: stderr: "" May 19 21:46:31.337: INFO: stdout: "update-demo-kitten-dx5tk update-demo-kitten-s5l6j " May 19 21:46:31.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dx5tk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' May 19 21:46:31.432: INFO: stderr: "" May 19 21:46:31.432: INFO: stdout: "true" May 19 21:46:31.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dx5tk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6725' May 19 21:46:31.526: INFO: stderr: "" May 19 21:46:31.526: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 19 21:46:31.526: INFO: validating pod update-demo-kitten-dx5tk May 19 21:46:31.558: INFO: got data: { "image": "kitten.jpg" } May 19 21:46:31.558: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 19 21:46:31.558: INFO: update-demo-kitten-dx5tk is verified up and running May 19 21:46:31.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-s5l6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6725' May 19 21:46:31.646: INFO: stderr: "" May 19 21:46:31.646: INFO: stdout: "true" May 19 21:46:31.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-s5l6j -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6725' May 19 21:46:31.748: INFO: stderr: "" May 19 21:46:31.748: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 19 21:46:31.748: INFO: validating pod update-demo-kitten-s5l6j May 19 21:46:31.753: INFO: got data: { "image": "kitten.jpg" } May 19 21:46:31.753: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 19 21:46:31.753: INFO: update-demo-kitten-s5l6j is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:46:31.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6725" for this suite. • [SLOW TEST:34.824 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":155,"skipped":2878,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:46:31.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:46:42.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9890" for this suite. • [SLOW TEST:11.119 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":156,"skipped":2880,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:46:42.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:46:42.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ead072fa-aa1a-408e-9751-53b977af5ac4" in namespace "projected-8783" to be "success or failure" May 19 21:46:42.997: INFO: Pod "downwardapi-volume-ead072fa-aa1a-408e-9751-53b977af5ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.354306ms May 19 21:46:45.000: INFO: Pod "downwardapi-volume-ead072fa-aa1a-408e-9751-53b977af5ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020594785s May 19 21:46:47.004: INFO: Pod "downwardapi-volume-ead072fa-aa1a-408e-9751-53b977af5ac4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024627061s STEP: Saw pod success May 19 21:46:47.004: INFO: Pod "downwardapi-volume-ead072fa-aa1a-408e-9751-53b977af5ac4" satisfied condition "success or failure" May 19 21:46:47.008: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ead072fa-aa1a-408e-9751-53b977af5ac4 container client-container: STEP: delete the pod May 19 21:46:47.048: INFO: Waiting for pod downwardapi-volume-ead072fa-aa1a-408e-9751-53b977af5ac4 to disappear May 19 21:46:47.060: INFO: Pod downwardapi-volume-ead072fa-aa1a-408e-9751-53b977af5ac4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:46:47.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8783" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2890,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:46:47.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod May 19 21:46:47.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6302 -- logs-generator --log-lines-total 100 --run-duration 20s' May 19 21:46:47.335: INFO: stderr: "" May 19 21:46:47.335: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 19 21:46:47.335: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 19 21:46:47.335: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6302" to be "running and ready, or succeeded" May 19 21:46:47.366: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 30.932713ms May 19 21:46:49.429: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09431986s May 19 21:46:51.433: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.098063589s May 19 21:46:51.433: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 19 21:46:51.433: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 19 21:46:51.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6302' May 19 21:46:51.552: INFO: stderr: "" May 19 21:46:51.552: INFO: stdout: "I0519 21:46:49.926293 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/c8qs 426\nI0519 21:46:50.126461 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/pq5n 391\nI0519 21:46:50.326474 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/zz7 584\nI0519 21:46:50.526517 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/jcq 367\nI0519 21:46:50.726582 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/257 498\nI0519 21:46:50.926474 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/7dzt 559\nI0519 21:46:51.126434 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/xj7 577\nI0519 21:46:51.326468 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/zzzx 281\nI0519 21:46:51.526615 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/2jx 289\n" STEP: limiting log lines May 19 21:46:51.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6302 --tail=1' May 19 21:46:51.679: INFO: stderr: "" May 19 21:46:51.679: INFO: stdout: "I0519 21:46:51.526615 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/2jx 289\n" May 19 21:46:51.679: INFO: got output "I0519 21:46:51.526615 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/2jx 289\n" STEP: limiting log bytes May 19 21:46:51.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6302 --limit-bytes=1' May 19 21:46:51.792: INFO: stderr: "" May 19 21:46:51.792: INFO: stdout: "I" May 19 21:46:51.792: INFO: got output "I" STEP: exposing timestamps May 19 21:46:51.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6302 --tail=1 --timestamps' May 19 21:46:51.908: INFO: stderr: "" May 19 21:46:51.908: INFO: stdout: "2020-05-19T21:46:51.726611239Z I0519 21:46:51.726442 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/f2xw 305\n" May 19 21:46:51.908: INFO: got output "2020-05-19T21:46:51.726611239Z I0519 21:46:51.726442 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/f2xw 305\n" STEP: restricting to a time range May 19 21:46:54.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6302 --since=1s' May 19 21:46:54.517: INFO: stderr: "" May 19 21:46:54.517: INFO: stdout: "I0519 21:46:53.526553 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/v77 540\nI0519 21:46:53.726468 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/gk2 248\nI0519 21:46:53.926463 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/f5tx 348\nI0519 21:46:54.126478 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/np8r 303\nI0519 21:46:54.326487 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/9rm 578\n" May 19 21:46:54.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6302 --since=24h' May 19 21:46:54.639: INFO: stderr: "" May 19 21:46:54.639: INFO: stdout: "I0519 21:46:49.926293 1 logs_generator.go:76] 0 GET
/api/v1/namespaces/default/pods/c8qs 426\nI0519 21:46:50.126461 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/pq5n 391\nI0519 21:46:50.326474 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/zz7 584\nI0519 21:46:50.526517 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/jcq 367\nI0519 21:46:50.726582 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/257 498\nI0519 21:46:50.926474 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/7dzt 559\nI0519 21:46:51.126434 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/xj7 577\nI0519 21:46:51.326468 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/zzzx 281\nI0519 21:46:51.526615 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/2jx 289\nI0519 21:46:51.726442 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/f2xw 305\nI0519 21:46:51.926432 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/jp5 538\nI0519 21:46:52.126514 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/lr9 296\nI0519 21:46:52.326422 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/r6dh 222\nI0519 21:46:52.526443 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/zfmt 368\nI0519 21:46:52.726436 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/vwwn 351\nI0519 21:46:52.926448 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/pxvf 434\nI0519 21:46:53.126467 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/mdv 533\nI0519 21:46:53.326507 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/dkp5 397\nI0519 21:46:53.526553 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/v77 540\nI0519 21:46:53.726468 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/gk2 248\nI0519 21:46:53.926463 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/f5tx 348\nI0519 21:46:54.126478 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/np8r 303\nI0519 21:46:54.326487 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/9rm 578\nI0519 21:46:54.526450 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/ftc6 585\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 19 21:46:54.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6302' May 19 21:46:59.242: INFO: stderr: "" May 19 21:46:59.242: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:46:59.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6302" for this suite. 
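A condensed recap of the log-filtering flags exercised in the spec above, reusing the logs-generator pod name from this run and omitting the --kubeconfig/--namespace plumbing:

kubectl logs logs-generator                        # full log stream
kubectl logs logs-generator --tail=1               # only the last line
kubectl logs logs-generator --limit-bytes=1        # only the first byte
kubectl logs logs-generator --tail=1 --timestamps  # prefix lines with RFC3339 timestamps
kubectl logs logs-generator --since=1s             # only entries from the last second
kubectl logs logs-generator --since=24h            # only entries from the last 24 hours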
• [SLOW TEST:12.190 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":158,"skipped":2907,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:46:59.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 19 21:46:59.348: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7050" to be "success or failure" May 19 21:46:59.386: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 38.206243ms May 19 21:47:01.391: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042651656s May 19 21:47:03.394: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046146398s May 19 21:47:05.398: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050171448s STEP: Saw pod success May 19 21:47:05.398: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 19 21:47:05.401: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 19 21:47:05.435: INFO: Waiting for pod pod-host-path-test to disappear May 19 21:47:05.477: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:47:05.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-7050" for this suite. 
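The HostPath spec checks the file mode of a hostPath-backed mount. A minimal sketch of an equivalent probe, assuming a node where hostPath volumes are permitted; the pod name and the /tmp path are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.29
    # Print the permission bits of the mounted directory.
    command: ["/bin/sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
kubectl logs hostpath-mode-demo   # shows the mode that the spec above asserts on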
• [SLOW TEST:6.226 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2921,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:47:05.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:47:21.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8960" for this suite. • [SLOW TEST:16.066 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":160,"skipped":2956,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:47:21.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 19 21:47:21.852: INFO: Waiting up to 5m0s for pod "pod-96438c4e-4c98-4bfe-a363-2d486822b487" in namespace "emptydir-5064" to be "success or failure" May 19 21:47:21.878: INFO: Pod "pod-96438c4e-4c98-4bfe-a363-2d486822b487": Phase="Pending", Reason="", readiness=false. Elapsed: 26.536993ms May 19 21:47:23.883: INFO: Pod "pod-96438c4e-4c98-4bfe-a363-2d486822b487": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.030947578s May 19 21:47:25.887: INFO: Pod "pod-96438c4e-4c98-4bfe-a363-2d486822b487": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035486853s STEP: Saw pod success May 19 21:47:25.887: INFO: Pod "pod-96438c4e-4c98-4bfe-a363-2d486822b487" satisfied condition "success or failure" May 19 21:47:25.891: INFO: Trying to get logs from node jerma-worker2 pod pod-96438c4e-4c98-4bfe-a363-2d486822b487 container test-container: STEP: delete the pod May 19 21:47:25.914: INFO: Waiting for pod pod-96438c4e-4c98-4bfe-a363-2d486822b487 to disappear May 19 21:47:25.945: INFO: Pod pod-96438c4e-4c98-4bfe-a363-2d486822b487 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:47:25.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5064" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2956,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:47:25.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-1772 STEP: creating replication controller nodeport-test in namespace services-1772 I0519 21:47:26.083528 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1772, replica count: 2 I0519 21:47:29.137471 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 21:47:32.137680 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 21:47:32.137: INFO: Creating new exec pod May 19 21:47:37.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1772 execpodf658g -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 19 21:47:37.540: INFO: stderr: "I0519 21:47:37.307026 2295 log.go:172] (0xc000936b00) (0xc00056a3c0) Create stream\nI0519 21:47:37.307100 2295 log.go:172] (0xc000936b00) (0xc00056a3c0) Stream added, broadcasting: 1\nI0519 21:47:37.310371 2295 log.go:172] (0xc000936b00) Reply frame received for 1\nI0519 21:47:37.310424 2295 log.go:172] (0xc000936b00) (0xc00056a460) Create stream\nI0519 21:47:37.310441 2295 log.go:172] (0xc000936b00) (0xc00056a460) Stream added, broadcasting: 3\nI0519 21:47:37.311575 2295 log.go:172] (0xc000936b00) Reply frame received for 3\nI0519 21:47:37.311633 2295 log.go:172] (0xc000936b00) (0xc00056a500) Create stream\nI0519 
21:47:37.311654 2295 log.go:172] (0xc000936b00) (0xc00056a500) Stream added, broadcasting: 5\nI0519 21:47:37.312794 2295 log.go:172] (0xc000936b00) Reply frame received for 5\nI0519 21:47:37.527839 2295 log.go:172] (0xc000936b00) Data frame received for 5\nI0519 21:47:37.527861 2295 log.go:172] (0xc00056a500) (5) Data frame handling\nI0519 21:47:37.527875 2295 log.go:172] (0xc00056a500) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0519 21:47:37.534540 2295 log.go:172] (0xc000936b00) Data frame received for 5\nI0519 21:47:37.534552 2295 log.go:172] (0xc00056a500) (5) Data frame handling\nI0519 21:47:37.534564 2295 log.go:172] (0xc00056a500) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0519 21:47:37.535125 2295 log.go:172] (0xc000936b00) Data frame received for 3\nI0519 21:47:37.535139 2295 log.go:172] (0xc00056a460) (3) Data frame handling\nI0519 21:47:37.535268 2295 log.go:172] (0xc000936b00) Data frame received for 5\nI0519 21:47:37.535295 2295 log.go:172] (0xc00056a500) (5) Data frame handling\nI0519 21:47:37.536641 2295 log.go:172] (0xc000936b00) Data frame received for 1\nI0519 21:47:37.536662 2295 log.go:172] (0xc00056a3c0) (1) Data frame handling\nI0519 21:47:37.536672 2295 log.go:172] (0xc00056a3c0) (1) Data frame sent\nI0519 21:47:37.536686 2295 log.go:172] (0xc000936b00) (0xc00056a3c0) Stream removed, broadcasting: 1\nI0519 21:47:37.536699 2295 log.go:172] (0xc000936b00) Go away received\nI0519 21:47:37.537097 2295 log.go:172] (0xc000936b00) (0xc00056a3c0) Stream removed, broadcasting: 1\nI0519 21:47:37.537266 2295 log.go:172] (0xc000936b00) (0xc00056a460) Stream removed, broadcasting: 3\nI0519 21:47:37.537282 2295 log.go:172] (0xc000936b00) (0xc00056a500) Stream removed, broadcasting: 5\n" May 19 21:47:37.540: INFO: stdout: "" May 19 21:47:37.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1772 execpodf658g -- /bin/sh -x -c nc -zv -t -w 2 10.103.39.13 80' May 19 21:47:37.732: INFO: stderr: "I0519 21:47:37.660619 2315 log.go:172] (0xc000646630) (0xc00074d5e0) Create stream\nI0519 21:47:37.660666 2315 log.go:172] (0xc000646630) (0xc00074d5e0) Stream added, broadcasting: 1\nI0519 21:47:37.663177 2315 log.go:172] (0xc000646630) Reply frame received for 1\nI0519 21:47:37.663221 2315 log.go:172] (0xc000646630) (0xc0009b8000) Create stream\nI0519 21:47:37.663232 2315 log.go:172] (0xc000646630) (0xc0009b8000) Stream added, broadcasting: 3\nI0519 21:47:37.664384 2315 log.go:172] (0xc000646630) Reply frame received for 3\nI0519 21:47:37.664424 2315 log.go:172] (0xc000646630) (0xc0009b80a0) Create stream\nI0519 21:47:37.664439 2315 log.go:172] (0xc000646630) (0xc0009b80a0) Stream added, broadcasting: 5\nI0519 21:47:37.665636 2315 log.go:172] (0xc000646630) Reply frame received for 5\nI0519 21:47:37.724759 2315 log.go:172] (0xc000646630) Data frame received for 5\nI0519 21:47:37.724806 2315 log.go:172] (0xc0009b80a0) (5) Data frame handling\nI0519 21:47:37.724823 2315 log.go:172] (0xc0009b80a0) (5) Data frame sent\nI0519 21:47:37.724836 2315 log.go:172] (0xc000646630) Data frame received for 5\nI0519 21:47:37.724845 2315 log.go:172] (0xc0009b80a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.39.13 80\nConnection to 10.103.39.13 80 port [tcp/http] succeeded!\nI0519 21:47:37.724871 2315 log.go:172] (0xc000646630) Data frame received for 3\nI0519 21:47:37.724894 2315 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0519 21:47:37.726787 2315 log.go:172] (0xc000646630) Data frame received 
for 1\nI0519 21:47:37.726833 2315 log.go:172] (0xc00074d5e0) (1) Data frame handling\nI0519 21:47:37.726855 2315 log.go:172] (0xc00074d5e0) (1) Data frame sent\nI0519 21:47:37.727002 2315 log.go:172] (0xc000646630) (0xc00074d5e0) Stream removed, broadcasting: 1\nI0519 21:47:37.727064 2315 log.go:172] (0xc000646630) Go away received\nI0519 21:47:37.727505 2315 log.go:172] (0xc000646630) (0xc00074d5e0) Stream removed, broadcasting: 1\nI0519 21:47:37.727538 2315 log.go:172] (0xc000646630) (0xc0009b8000) Stream removed, broadcasting: 3\nI0519 21:47:37.727549 2315 log.go:172] (0xc000646630) (0xc0009b80a0) Stream removed, broadcasting: 5\n" May 19 21:47:37.732: INFO: stdout: "" May 19 21:47:37.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1772 execpodf658g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32362' May 19 21:47:37.927: INFO: stderr: "I0519 21:47:37.865712 2336 log.go:172] (0xc000956000) (0xc000946140) Create stream\nI0519 21:47:37.865788 2336 log.go:172] (0xc000956000) (0xc000946140) Stream added, broadcasting: 1\nI0519 21:47:37.870538 2336 log.go:172] (0xc000956000) Reply frame received for 1\nI0519 21:47:37.870574 2336 log.go:172] (0xc000956000) (0xc0005fc780) Create stream\nI0519 21:47:37.870584 2336 log.go:172] (0xc000956000) (0xc0005fc780) Stream added, broadcasting: 3\nI0519 21:47:37.871460 2336 log.go:172] (0xc000956000) Reply frame received for 3\nI0519 21:47:37.871504 2336 log.go:172] (0xc000956000) (0xc0003cd540) Create stream\nI0519 21:47:37.871515 2336 log.go:172] (0xc000956000) (0xc0003cd540) Stream added, broadcasting: 5\nI0519 21:47:37.872398 2336 log.go:172] (0xc000956000) Reply frame received for 5\nI0519 21:47:37.920916 2336 log.go:172] (0xc000956000) Data frame received for 3\nI0519 21:47:37.920957 2336 log.go:172] (0xc0005fc780) (3) Data frame handling\nI0519 21:47:37.920999 2336 log.go:172] (0xc000956000) Data frame received for 5\nI0519 21:47:37.921016 2336 log.go:172] (0xc0003cd540) (5) Data frame handling\nI0519 21:47:37.921026 2336 log.go:172] (0xc0003cd540) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 32362\nConnection to 172.17.0.10 32362 port [tcp/32362] succeeded!\nI0519 21:47:37.921097 2336 log.go:172] (0xc000956000) Data frame received for 5\nI0519 21:47:37.921346 2336 log.go:172] (0xc0003cd540) (5) Data frame handling\nI0519 21:47:37.923122 2336 log.go:172] (0xc000956000) Data frame received for 1\nI0519 21:47:37.923151 2336 log.go:172] (0xc000946140) (1) Data frame handling\nI0519 21:47:37.923173 2336 log.go:172] (0xc000946140) (1) Data frame sent\nI0519 21:47:37.923273 2336 log.go:172] (0xc000956000) (0xc000946140) Stream removed, broadcasting: 1\nI0519 21:47:37.923467 2336 log.go:172] (0xc000956000) Go away received\nI0519 21:47:37.923674 2336 log.go:172] (0xc000956000) (0xc000946140) Stream removed, broadcasting: 1\nI0519 21:47:37.923699 2336 log.go:172] (0xc000956000) (0xc0005fc780) Stream removed, broadcasting: 3\nI0519 21:47:37.923717 2336 log.go:172] (0xc000956000) (0xc0003cd540) Stream removed, broadcasting: 5\n" May 19 21:47:37.927: INFO: stdout: "" May 19 21:47:37.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1772 execpodf658g -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32362' May 19 21:47:38.147: INFO: stderr: "I0519 21:47:38.065098 2357 log.go:172] (0xc00057cf20) (0xc0006a5a40) Create stream\nI0519 21:47:38.065262 2357 log.go:172] (0xc00057cf20) (0xc0006a5a40) Stream added, broadcasting: 1\nI0519 21:47:38.067969 2357 log.go:172] 
(0xc00057cf20) Reply frame received for 1\nI0519 21:47:38.068017 2357 log.go:172] (0xc00057cf20) (0xc0009de000) Create stream\nI0519 21:47:38.068029 2357 log.go:172] (0xc00057cf20) (0xc0009de000) Stream added, broadcasting: 3\nI0519 21:47:38.069053 2357 log.go:172] (0xc00057cf20) Reply frame received for 3\nI0519 21:47:38.069101 2357 log.go:172] (0xc00057cf20) (0xc0006a5c20) Create stream\nI0519 21:47:38.069276 2357 log.go:172] (0xc00057cf20) (0xc0006a5c20) Stream added, broadcasting: 5\nI0519 21:47:38.070624 2357 log.go:172] (0xc00057cf20) Reply frame received for 5\nI0519 21:47:38.137527 2357 log.go:172] (0xc00057cf20) Data frame received for 5\nI0519 21:47:38.137567 2357 log.go:172] (0xc0006a5c20) (5) Data frame handling\nI0519 21:47:38.137594 2357 log.go:172] (0xc0006a5c20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 32362\nI0519 21:47:38.137747 2357 log.go:172] (0xc00057cf20) Data frame received for 5\nI0519 21:47:38.137759 2357 log.go:172] (0xc0006a5c20) (5) Data frame handling\nI0519 21:47:38.137765 2357 log.go:172] (0xc0006a5c20) (5) Data frame sent\nConnection to 172.17.0.8 32362 port [tcp/32362] succeeded!\nI0519 21:47:38.138354 2357 log.go:172] (0xc00057cf20) Data frame received for 3\nI0519 21:47:38.138378 2357 log.go:172] (0xc0009de000) (3) Data frame handling\nI0519 21:47:38.138426 2357 log.go:172] (0xc00057cf20) Data frame received for 5\nI0519 21:47:38.138440 2357 log.go:172] (0xc0006a5c20) (5) Data frame handling\nI0519 21:47:38.140171 2357 log.go:172] (0xc00057cf20) Data frame received for 1\nI0519 21:47:38.140204 2357 log.go:172] (0xc0006a5a40) (1) Data frame handling\nI0519 21:47:38.140218 2357 log.go:172] (0xc0006a5a40) (1) Data frame sent\nI0519 21:47:38.140233 2357 log.go:172] (0xc00057cf20) (0xc0006a5a40) Stream removed, broadcasting: 1\nI0519 21:47:38.140438 2357 log.go:172] (0xc00057cf20) Go away received\nI0519 21:47:38.140573 2357 log.go:172] (0xc00057cf20) (0xc0006a5a40) Stream removed, broadcasting: 1\nI0519 21:47:38.140596 2357 log.go:172] (0xc00057cf20) (0xc0009de000) Stream removed, broadcasting: 3\nI0519 21:47:38.140611 2357 log.go:172] (0xc00057cf20) (0xc0006a5c20) Stream removed, broadcasting: 5\n" May 19 21:47:38.147: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:47:38.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1772" for this suite. 
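For reference: the three nc probes above cover the full NodePort reachability matrix, the Service DNS name (nodeport-test:80), the ClusterIP (10.103.39.13:80), and each node IP on the allocated NodePort (172.17.0.10 and 172.17.0.8 on 32362). A minimal Service exposing the same matrix, as a hedged sketch (the selector label is an assumption; the e2e framework builds its equivalent programmatically):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: nodeport-test          # name matches the nc target in the probes above
    spec:
      type: NodePort               # asks the cluster to allocate a port such as 32362
      selector:
        app: nodeport-test         # assumed label; must match the backend pods
      ports:
      - port: 80                   # the ClusterIP port probed above
        targetPort: 80
    EOF
    # Probe it the way this test does, from a scratch pod in the same namespace:
    kubectl exec <exec-pod> -- nc -zv -t -w 2 nodeport-test 80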
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.201 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":162,"skipped":2968,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:47:38.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6850 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 19 21:47:38.277: INFO: Found 0 stateful pods, waiting for 3 May 19 21:47:48.423: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 21:47:48.423: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 21:47:48.423: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 19 21:47:58.281: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 21:47:58.281: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 21:47:58.281: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 19 21:47:58.305: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 19 21:48:08.351: INFO: Updating stateful set ss2 May 19 21:48:08.395: INFO: Waiting for Pod statefulset-6850/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 19 21:48:18.402: INFO: Waiting for Pod statefulset-6850/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 19 21:48:28.972: INFO: Found 2 stateful pods, waiting for 3 May 19 21:48:38.977: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 21:48:38.977: INFO: 
Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 21:48:38.977: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 19 21:48:39.002: INFO: Updating stateful set ss2 May 19 21:48:39.035: INFO: Waiting for Pod statefulset-6850/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 19 21:48:49.059: INFO: Updating stateful set ss2 May 19 21:48:49.081: INFO: Waiting for StatefulSet statefulset-6850/ss2 to complete update May 19 21:48:49.081: INFO: Waiting for Pod statefulset-6850/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 19 21:48:59.090: INFO: Deleting all statefulset in ns statefulset-6850 May 19 21:48:59.093: INFO: Scaling statefulset ss2 to 0 May 19 21:49:19.112: INFO: Waiting for statefulset status.replicas updated to 0 May 19 21:49:19.115: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:49:19.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6850" for this suite. • [SLOW TEST:100.987 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":163,"skipped":2969,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:49:19.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0519 21:49:49.768443 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
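While the test sits out its 30-second window above, the orphan semantics it verifies are also reachable from kubectl. A minimal sketch with a hypothetical deployment name; on a cluster of this vintage (kubectl v1.17), --cascade=false translates to deleteOptions with PropagationPolicy=Orphan:

    kubectl create deployment sample --image=nginx    # hypothetical workload
    kubectl get rs -l app=sample                      # note the generated ReplicaSet
    kubectl delete deployment sample --cascade=false  # orphan the dependents
    kubectl get rs -l app=sample                      # RS survives, ownerReferences removed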
May 19 21:49:49.768: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:49:49.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2729" for this suite. • [SLOW TEST:30.633 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":164,"skipped":2989,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:49:49.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:49:49.937: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"10aa40e4-19d7-48bf-b7e6-954f44ff5ace", Controller:(*bool)(0xc00386090a), BlockOwnerDeletion:(*bool)(0xc00386090b)}} May 19 21:49:49.952: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d28e2741-1711-4459-93fa-700df7e830f2", Controller:(*bool)(0xc003b9f912), BlockOwnerDeletion:(*bool)(0xc003b9f913)}} May 19 21:49:50.031: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b0c328bb-e043-4b42-af54-a2259758ed09", Controller:(*bool)(0xc0039ea31a), BlockOwnerDeletion:(*bool)(0xc0039ea31b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:49:55.360: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "gc-3543" for this suite. • [SLOW TEST:5.651 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":165,"skipped":2992,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:49:55.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:49:55.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11e34e5b-70ef-4003-aec0-6d6731feaf23" in namespace "downward-api-8801" to be "success or failure" May 19 21:49:55.929: INFO: Pod "downwardapi-volume-11e34e5b-70ef-4003-aec0-6d6731feaf23": Phase="Pending", Reason="", readiness=false. Elapsed: 34.655238ms May 19 21:49:57.932: INFO: Pod "downwardapi-volume-11e34e5b-70ef-4003-aec0-6d6731feaf23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038205796s May 19 21:49:59.937: INFO: Pod "downwardapi-volume-11e34e5b-70ef-4003-aec0-6d6731feaf23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043073421s STEP: Saw pod success May 19 21:49:59.937: INFO: Pod "downwardapi-volume-11e34e5b-70ef-4003-aec0-6d6731feaf23" satisfied condition "success or failure" May 19 21:49:59.940: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-11e34e5b-70ef-4003-aec0-6d6731feaf23 container client-container: STEP: delete the pod May 19 21:49:59.982: INFO: Waiting for pod downwardapi-volume-11e34e5b-70ef-4003-aec0-6d6731feaf23 to disappear May 19 21:49:59.988: INFO: Pod downwardapi-volume-11e34e5b-70ef-4003-aec0-6d6731feaf23 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:49:59.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8801" for this suite. 
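The Downward API volume exercised above projects pod fields into files; a minimal equivalent spec, as a sketch (pod and volume names are illustrative, not from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-volume-demo       # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # the "podname only" field this test asserts on
    EOF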
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2994,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:00.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:50:00.674: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:50:02.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:50:05.756: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:05.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3347" for this suite. STEP: Destroying namespace "webhook-3347-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.001 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":167,"skipped":2997,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:06.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:50:07.080: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:50:09.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521807, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521807, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521807, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521806, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 21:50:11.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521807, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521807, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521807, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521806, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:50:14.126: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:50:14.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:15.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5532" for this suite. STEP: Destroying namespace "webhook-5532-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.475 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":168,"skipped":3007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:15.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-a22f56b7-ebd9-4cb8-97c7-bafb3c9b80fc STEP: Creating a pod to test consume secrets May 19 21:50:15.627: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6a10266c-61b0-4c01-874f-357100b350ca" in 
namespace "projected-56" to be "success or failure" May 19 21:50:15.650: INFO: Pod "pod-projected-secrets-6a10266c-61b0-4c01-874f-357100b350ca": Phase="Pending", Reason="", readiness=false. Elapsed: 22.602602ms May 19 21:50:17.780: INFO: Pod "pod-projected-secrets-6a10266c-61b0-4c01-874f-357100b350ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152384346s May 19 21:50:19.784: INFO: Pod "pod-projected-secrets-6a10266c-61b0-4c01-874f-357100b350ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156643865s STEP: Saw pod success May 19 21:50:19.784: INFO: Pod "pod-projected-secrets-6a10266c-61b0-4c01-874f-357100b350ca" satisfied condition "success or failure" May 19 21:50:19.787: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-6a10266c-61b0-4c01-874f-357100b350ca container projected-secret-volume-test: STEP: delete the pod May 19 21:50:19.848: INFO: Waiting for pod pod-projected-secrets-6a10266c-61b0-4c01-874f-357100b350ca to disappear May 19 21:50:20.066: INFO: Pod pod-projected-secrets-6a10266c-61b0-4c01-874f-357100b350ca no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:20.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-56" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":3037,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:20.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 19 21:50:20.255: INFO: Waiting up to 5m0s for pod "pod-810a4225-b89b-454f-b295-5d20935ca9f4" in namespace "emptydir-9491" to be "success or failure" May 19 21:50:20.258: INFO: Pod "pod-810a4225-b89b-454f-b295-5d20935ca9f4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.759836ms May 19 21:50:22.343: INFO: Pod "pod-810a4225-b89b-454f-b295-5d20935ca9f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088176103s May 19 21:50:24.346: INFO: Pod "pod-810a4225-b89b-454f-b295-5d20935ca9f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.091634321s STEP: Saw pod success May 19 21:50:24.346: INFO: Pod "pod-810a4225-b89b-454f-b295-5d20935ca9f4" satisfied condition "success or failure" May 19 21:50:24.349: INFO: Trying to get logs from node jerma-worker pod pod-810a4225-b89b-454f-b295-5d20935ca9f4 container test-container: STEP: delete the pod May 19 21:50:24.400: INFO: Waiting for pod pod-810a4225-b89b-454f-b295-5d20935ca9f4 to disappear May 19 21:50:24.415: INFO: Pod pod-810a4225-b89b-454f-b295-5d20935ca9f4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:24.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9491" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":3048,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:24.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:30.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3622" for this suite. STEP: Destroying namespace "nsdeletetest-8907" for this suite. May 19 21:50:30.654: INFO: Namespace nsdeletetest-8907 was already deleted STEP: Destroying namespace "nsdeletetest-6137" for this suite. 
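The lifecycle this test asserts, that Services are removed together with their namespace and do not reappear when the namespace is recreated, can be observed by hand; a sketch with illustrative names:

    kubectl create namespace nsdelete-demo
    kubectl -n nsdelete-demo create service clusterip test-svc --tcp=80:80
    kubectl delete namespace nsdelete-demo   # waits for the namespace contents to be reaped
    kubectl create namespace nsdelete-demo   # recreate under the same name
    kubectl -n nsdelete-demo get services    # empty: the service did not survive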
• [SLOW TEST:6.235 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":171,"skipped":3051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:30.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:50:30.698: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:34.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5376" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":3083,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:34.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:50:34.991: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ed92dc27-07a6-48c5-8287-2544dad17fd7" in namespace "security-context-test-8240" to be "success or failure" May 19 21:50:35.008: INFO: Pod "busybox-user-65534-ed92dc27-07a6-48c5-8287-2544dad17fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.501532ms May 19 21:50:37.012: INFO: Pod "busybox-user-65534-ed92dc27-07a6-48c5-8287-2544dad17fd7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021008691s May 19 21:50:39.016: INFO: Pod "busybox-user-65534-ed92dc27-07a6-48c5-8287-2544dad17fd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025087561s May 19 21:50:39.016: INFO: Pod "busybox-user-65534-ed92dc27-07a6-48c5-8287-2544dad17fd7" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:39.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8240" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":3104,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:39.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 19 21:50:39.237: INFO: Waiting up to 5m0s for pod "downward-api-bf737291-aa1d-4c5c-9649-8fd312609dff" in namespace "downward-api-9196" to be "success or failure" May 19 21:50:39.242: INFO: Pod "downward-api-bf737291-aa1d-4c5c-9649-8fd312609dff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.545396ms May 19 21:50:41.246: INFO: Pod "downward-api-bf737291-aa1d-4c5c-9649-8fd312609dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008975188s May 19 21:50:43.250: INFO: Pod "downward-api-bf737291-aa1d-4c5c-9649-8fd312609dff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013171215s STEP: Saw pod success May 19 21:50:43.250: INFO: Pod "downward-api-bf737291-aa1d-4c5c-9649-8fd312609dff" satisfied condition "success or failure" May 19 21:50:43.253: INFO: Trying to get logs from node jerma-worker pod downward-api-bf737291-aa1d-4c5c-9649-8fd312609dff container dapi-container: STEP: delete the pod May 19 21:50:43.288: INFO: Waiting for pod downward-api-bf737291-aa1d-4c5c-9649-8fd312609dff to disappear May 19 21:50:43.302: INFO: Pod downward-api-bf737291-aa1d-4c5c-9649-8fd312609dff no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:43.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9196" for this suite. 
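The env-var flavor of the Downward API used here wires pod fields into the environment through fieldRef; a minimal sketch (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-env-demo          # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo POD_UID=$POD_UID"]
        env:
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid   # the field this test asserts on
    EOF
    kubectl logs dapi-env-demo     # prints the UID once the pod has completed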
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":3115,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:43.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-66025adc-16f9-4cf8-ba18-5901af557c34 STEP: Creating a pod to test consume configMaps May 19 21:50:43.384: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5e22bafe-e7b5-4d19-b7eb-b88123c403db" in namespace "projected-9706" to be "success or failure" May 19 21:50:43.398: INFO: Pod "pod-projected-configmaps-5e22bafe-e7b5-4d19-b7eb-b88123c403db": Phase="Pending", Reason="", readiness=false. Elapsed: 13.466962ms May 19 21:50:45.402: INFO: Pod "pod-projected-configmaps-5e22bafe-e7b5-4d19-b7eb-b88123c403db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018241209s May 19 21:50:47.406: INFO: Pod "pod-projected-configmaps-5e22bafe-e7b5-4d19-b7eb-b88123c403db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022041771s STEP: Saw pod success May 19 21:50:47.406: INFO: Pod "pod-projected-configmaps-5e22bafe-e7b5-4d19-b7eb-b88123c403db" satisfied condition "success or failure" May 19 21:50:47.408: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-5e22bafe-e7b5-4d19-b7eb-b88123c403db container projected-configmap-volume-test: STEP: delete the pod May 19 21:50:47.422: INFO: Waiting for pod pod-projected-configmaps-5e22bafe-e7b5-4d19-b7eb-b88123c403db to disappear May 19 21:50:47.427: INFO: Pod pod-projected-configmaps-5e22bafe-e7b5-4d19-b7eb-b88123c403db no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:47.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9706" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":3117,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:47.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:50:47.603: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 24.639507ms)
May 19 21:50:47.608: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.969638ms)
May 19 21:50:47.611: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.182288ms)
May 19 21:50:47.613: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.353376ms)
May 19 21:50:47.615: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.27843ms)
May 19 21:50:47.619: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.009985ms)
May 19 21:50:47.623: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.093297ms)
May 19 21:50:47.643: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 19.924341ms)
May 19 21:50:47.646: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.288157ms)
May 19 21:50:47.649: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.00479ms)
May 19 21:50:47.652: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.667811ms)
May 19 21:50:47.654: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.635016ms)
May 19 21:50:47.657: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.872767ms)
May 19 21:50:47.660: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.586224ms)
May 19 21:50:47.662: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.50682ms)
May 19 21:50:47.665: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.596193ms)
May 19 21:50:47.667: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.125403ms)
May 19 21:50:47.669: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.167415ms)
May 19 21:50:47.672: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.407385ms)
May 19 21:50:47.675: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.15094ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:47.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6540" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":176,"skipped":3118,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:47.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 19 21:50:47.862: INFO: Waiting up to 5m0s for pod "downward-api-a5275203-2ee8-41f8-ab6d-6586809408e4" in namespace "downward-api-4904" to be "success or failure" May 19 21:50:47.920: INFO: Pod "downward-api-a5275203-2ee8-41f8-ab6d-6586809408e4": Phase="Pending", Reason="", readiness=false. Elapsed: 58.034642ms May 19 21:50:49.924: INFO: Pod "downward-api-a5275203-2ee8-41f8-ab6d-6586809408e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062421245s May 19 21:50:51.929: INFO: Pod "downward-api-a5275203-2ee8-41f8-ab6d-6586809408e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067077604s STEP: Saw pod success May 19 21:50:51.929: INFO: Pod "downward-api-a5275203-2ee8-41f8-ab6d-6586809408e4" satisfied condition "success or failure" May 19 21:50:51.932: INFO: Trying to get logs from node jerma-worker pod downward-api-a5275203-2ee8-41f8-ab6d-6586809408e4 container dapi-container: STEP: delete the pod May 19 21:50:51.956: INFO: Waiting for pod downward-api-a5275203-2ee8-41f8-ab6d-6586809408e4 to disappear May 19 21:50:51.974: INFO: Pod downward-api-a5275203-2ee8-41f8-ab6d-6586809408e4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:51.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4904" for this suite. 
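The twenty proxied requests listed earlier in this block go through the apiserver's node proxy subresource with an explicit kubelet port; the same listing can be fetched by hand (node name and port taken from those requests):

    kubectl get --raw "/api/v1/nodes/jerma-worker2:10250/proxy/logs/"
    # returns the kubelet's log directory listing, i.e. the containers/ and pods/ entries seen above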
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":3133,"failed":0} SS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:51.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 19 21:50:52.089: INFO: Created pod &Pod{ObjectMeta:{dns-5567 dns-5567 /api/v1/namespaces/dns-5567/pods/dns-5567 8896534d-d9d2-4f07-a066-60319abe5892 17539851 0 2020-05-19 21:50:52 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wdl8d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wdl8d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wdl8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate
{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... May 19 21:50:56.103: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5567 PodName:dns-5567 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 21:50:56.103: INFO: >>> kubeConfig: /root/.kube/config I0519 21:50:56.138824 6 log.go:172] (0xc0012434a0) (0xc00225c640) Create stream I0519 21:50:56.138851 6 log.go:172] (0xc0012434a0) (0xc00225c640) Stream added, broadcasting: 1 I0519 21:50:56.141504 6 log.go:172] (0xc0012434a0) Reply frame received for 1 I0519 21:50:56.141543 6 log.go:172] (0xc0012434a0) (0xc00159be00) Create stream I0519 21:50:56.141556 6 log.go:172] (0xc0012434a0) (0xc00159be00) Stream added, broadcasting: 3 I0519 21:50:56.142761 6 log.go:172] (0xc0012434a0) Reply frame received for 3 I0519 21:50:56.142802 6 log.go:172] (0xc0012434a0) (0xc00225c960) Create stream I0519 21:50:56.142816 6 log.go:172] (0xc0012434a0) (0xc00225c960) Stream added, broadcasting: 5 I0519 21:50:56.143765 6 log.go:172] (0xc0012434a0) Reply frame received for 5 I0519 21:50:56.233812 6 log.go:172] (0xc0012434a0) Data frame received for 3 I0519 21:50:56.233844 6 log.go:172] (0xc00159be00) (3) Data frame handling I0519 21:50:56.233860 6 log.go:172] (0xc00159be00) (3) Data frame sent I0519 21:50:56.234583 6 log.go:172] (0xc0012434a0) Data frame received for 3 I0519 21:50:56.234629 6 log.go:172] (0xc00159be00) (3) Data frame handling I0519 21:50:56.234649 6 log.go:172] (0xc0012434a0) Data frame received for 5 I0519 21:50:56.234666 6 log.go:172] (0xc00225c960) (5) Data frame handling I0519 21:50:56.236321 6 log.go:172] (0xc0012434a0) Data frame received for 1 I0519 21:50:56.236350 6 log.go:172] (0xc00225c640) (1) Data frame handling I0519 21:50:56.236380 6 log.go:172] (0xc00225c640) (1) Data frame sent I0519 21:50:56.236401 6 log.go:172] (0xc0012434a0) (0xc00225c640) Stream removed, broadcasting: 1 I0519 21:50:56.236419 6 log.go:172] (0xc0012434a0) Go away received I0519 21:50:56.236542 6 log.go:172] (0xc0012434a0) (0xc00225c640) Stream removed, broadcasting: 1 I0519 21:50:56.236564 6 log.go:172] (0xc0012434a0) (0xc00159be00) Stream removed, broadcasting: 3 I0519 21:50:56.236574 6 log.go:172] (0xc0012434a0) (0xc00225c960) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 19 21:50:56.236: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5567 PodName:dns-5567 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 21:50:56.236: INFO: >>> kubeConfig: /root/.kube/config I0519 21:50:56.262853 6 log.go:172] (0xc00229e420) (0xc0013343c0) Create stream I0519 21:50:56.262888 6 log.go:172] (0xc00229e420) (0xc0013343c0) Stream added, broadcasting: 1 I0519 21:50:56.265487 6 log.go:172] (0xc00229e420) Reply frame received for 1 I0519 21:50:56.265521 6 log.go:172] (0xc00229e420) (0xc0023b40a0) Create stream I0519 21:50:56.265531 6 log.go:172] (0xc00229e420) (0xc0023b40a0) Stream added, broadcasting: 3 I0519 21:50:56.266502 6 log.go:172] (0xc00229e420) Reply frame received for 3 I0519 21:50:56.266562 6 log.go:172] (0xc00229e420) (0xc001334500) Create stream I0519 21:50:56.266581 6 log.go:172] (0xc00229e420) (0xc001334500) Stream added, broadcasting: 5 I0519 21:50:56.267608 6 log.go:172] (0xc00229e420) Reply frame received for 5 I0519 21:50:56.323823 6 log.go:172] (0xc00229e420) Data frame received for 3 I0519 21:50:56.323844 6 log.go:172] (0xc0023b40a0) (3) Data frame handling I0519 21:50:56.323856 6 log.go:172] (0xc0023b40a0) (3) Data frame sent I0519 21:50:56.326075 6 log.go:172] (0xc00229e420) Data frame received for 5 I0519 21:50:56.326111 6 log.go:172] (0xc001334500) (5) Data frame handling I0519 21:50:56.326157 6 log.go:172] (0xc00229e420) Data frame received for 3 I0519 21:50:56.326178 6 log.go:172] (0xc0023b40a0) (3) Data frame handling I0519 21:50:56.327932 6 log.go:172] (0xc00229e420) Data frame received for 1 I0519 21:50:56.327962 6 log.go:172] (0xc0013343c0) (1) Data frame handling I0519 21:50:56.327989 6 log.go:172] (0xc0013343c0) (1) Data frame sent I0519 21:50:56.328012 6 log.go:172] (0xc00229e420) (0xc0013343c0) Stream removed, broadcasting: 1 I0519 21:50:56.328042 6 log.go:172] (0xc00229e420) Go away received I0519 21:50:56.328141 6 log.go:172] (0xc00229e420) (0xc0013343c0) Stream removed, broadcasting: 1 I0519 21:50:56.328170 6 log.go:172] (0xc00229e420) (0xc0023b40a0) Stream removed, broadcasting: 3 I0519 21:50:56.328194 6 log.go:172] (0xc00229e420) (0xc001334500) Stream removed, broadcasting: 5 May 19 21:50:56.328: INFO: Deleting pod dns-5567... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:50:56.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5567" for this suite. 
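The dnsPolicy:None pod created above carries its resolver configuration entirely in dnsConfig; the relevant fragment, reconstructed from the PodDNSConfig in the dump (nameserver and search values are from this run; the pod name is illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-demo               # illustrative name
    spec:
      dnsPolicy: None              # ignore the cluster resolv.conf entirely
      dnsConfig:
        nameservers: ["1.1.1.1"]           # from the dump above
        searches: ["resolv.conf.local"]    # from the dump above
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["pause"]
    EOF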
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":178,"skipped":3135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:50:56.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:50:57.415: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:50:59.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521857, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521857, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521857, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521857, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:51:02.495: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:51:02.503: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-739" for this suite. STEP: Destroying namespace "webhook-739-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.220 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":179,"skipped":3208,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:51:02.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:51:03.257: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 19 21:51:05.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521863, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521863, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521863, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521863, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:51:08.300: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and 
MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:51:08.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8129" for this suite. STEP: Destroying namespace "webhook-8129-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.033 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":180,"skipped":3220,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:51:08.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 19 21:51:08.993: INFO: Waiting up to 5m0s for pod "client-containers-5c998489-b542-4cf0-838e-6a32a5d8cee4" in namespace "containers-7214" to be "success or failure" May 19 21:51:09.122: INFO: Pod "client-containers-5c998489-b542-4cf0-838e-6a32a5d8cee4": Phase="Pending", Reason="", readiness=false. Elapsed: 129.126295ms May 19 21:51:11.126: INFO: Pod "client-containers-5c998489-b542-4cf0-838e-6a32a5d8cee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133224417s May 19 21:51:13.131: INFO: Pod "client-containers-5c998489-b542-4cf0-838e-6a32a5d8cee4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.137664834s STEP: Saw pod success May 19 21:51:13.131: INFO: Pod "client-containers-5c998489-b542-4cf0-838e-6a32a5d8cee4" satisfied condition "success or failure" May 19 21:51:13.134: INFO: Trying to get logs from node jerma-worker2 pod client-containers-5c998489-b542-4cf0-838e-6a32a5d8cee4 container test-container: STEP: delete the pod May 19 21:51:13.168: INFO: Waiting for pod client-containers-5c998489-b542-4cf0-838e-6a32a5d8cee4 to disappear May 19 21:51:13.171: INFO: Pod client-containers-5c998489-b542-4cf0-838e-6a32a5d8cee4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:51:13.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7214" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3223,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:51:13.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 19 21:51:13.463: INFO: >>> kubeConfig: /root/.kube/config May 19 21:51:16.411: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:51:26.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5974" for this suite. 
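The CustomResourcePublishOpenAPI test above registers two CRDs in different API groups and verifies both show up in the OpenAPI documentation. A minimal apiextensions.k8s.io/v1 CRD of the kind it registers might look like this; the group, kind, and plural names are hypothetical (the real test generates random e2e-test-crd-publish-openapi-* names):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.group-a.example.com     # must be <plural>.<group>
spec:
  group: group-a.example.com         # the test uses two CRDs with two distinct groups
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true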
• [SLOW TEST:13.800 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":182,"skipped":3223,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:51:26.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 21:51:27.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa92a1e6-aa4d-4568-b903-9c17271c050f" in namespace "projected-8812" to be "success or failure" May 19 21:51:27.040: INFO: Pod "downwardapi-volume-fa92a1e6-aa4d-4568-b903-9c17271c050f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.537968ms May 19 21:51:29.045: INFO: Pod "downwardapi-volume-fa92a1e6-aa4d-4568-b903-9c17271c050f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009195933s May 19 21:51:31.100: INFO: Pod "downwardapi-volume-fa92a1e6-aa4d-4568-b903-9c17271c050f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063946189s STEP: Saw pod success May 19 21:51:31.100: INFO: Pod "downwardapi-volume-fa92a1e6-aa4d-4568-b903-9c17271c050f" satisfied condition "success or failure" May 19 21:51:31.230: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fa92a1e6-aa4d-4568-b903-9c17271c050f container client-container: STEP: delete the pod May 19 21:51:31.282: INFO: Waiting for pod downwardapi-volume-fa92a1e6-aa4d-4568-b903-9c17271c050f to disappear May 19 21:51:31.309: INFO: Pod downwardapi-volume-fa92a1e6-aa4d-4568-b903-9c17271c050f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:51:31.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8812" for this suite. 
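The projected downwardAPI test above mounts the container's limits.memory through a resourceFieldRef without setting a memory limit, so the kubelet reports the node's allocatable memory instead. A sketch of such a pod (pod/volume names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # no limit set, so node allocatable is exposed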
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:51:31.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 19 21:51:31.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3773' May 19 21:51:34.980: INFO: stderr: "" May 19 21:51:34.980: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 19 21:51:34.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3773' May 19 21:51:49.232: INFO: stderr: "" May 19 21:51:49.232: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:51:49.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3773" for this suite. 
• [SLOW TEST:17.924 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":184,"skipped":3252,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:51:49.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 19 21:51:57.451: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 21:51:57.499: INFO: Pod pod-with-poststart-exec-hook still exists May 19 21:51:59.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 21:51:59.504: INFO: Pod pod-with-poststart-exec-hook still exists May 19 21:52:01.500: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 19 21:52:01.503: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:52:01.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6438" for this suite. 
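The lifecycle-hook test above creates a pod with a postStart exec hook, checks that the hook ran, then deletes the pod and polls until it disappears. In the real test the hook calls back to the helper pod created in BeforeEach; in the sketch below a local echo stands in, and the image and commands are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                   # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]   # illustrative hook body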
• [SLOW TEST:12.270 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3255,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:52:01.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9965 STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 21:52:01.622: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 19 21:52:25.717: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.87:8080/dial?request=hostname&protocol=http&host=10.244.1.66&port=8080&tries=1'] Namespace:pod-network-test-9965 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 21:52:25.717: INFO: >>> kubeConfig: /root/.kube/config I0519 21:52:25.748093 6 log.go:172] (0xc0051eeb00) (0xc001bf9540) Create stream I0519 21:52:25.748129 6 log.go:172] (0xc0051eeb00) (0xc001bf9540) Stream added, broadcasting: 1 I0519 21:52:25.749969 6 log.go:172] (0xc0051eeb00) Reply frame received for 1 I0519 21:52:25.750024 6 log.go:172] (0xc0051eeb00) (0xc001bf9860) Create stream I0519 21:52:25.750038 6 log.go:172] (0xc0051eeb00) (0xc001bf9860) Stream added, broadcasting: 3 I0519 21:52:25.750971 6 log.go:172] (0xc0051eeb00) Reply frame received for 3 I0519 21:52:25.751006 6 log.go:172] (0xc0051eeb00) (0xc001bf99a0) Create stream I0519 21:52:25.751018 6 log.go:172] (0xc0051eeb00) (0xc001bf99a0) Stream added, broadcasting: 5 I0519 21:52:25.751942 6 log.go:172] (0xc0051eeb00) Reply frame received for 5 I0519 21:52:25.824257 6 log.go:172] (0xc0051eeb00) Data frame received for 3 I0519 21:52:25.824283 6 log.go:172] (0xc001bf9860) (3) Data frame handling I0519 21:52:25.824306 6 log.go:172] (0xc001bf9860) (3) Data frame sent I0519 21:52:25.824684 6 log.go:172] (0xc0051eeb00) Data frame received for 3 I0519 21:52:25.824719 6 log.go:172] (0xc001bf9860) (3) Data frame handling I0519 21:52:25.824870 6 log.go:172] (0xc0051eeb00) Data frame received for 5 I0519 21:52:25.824894 6 log.go:172] (0xc001bf99a0) (5) Data frame handling I0519 21:52:25.826774 6 log.go:172] 
(0xc0051eeb00) Data frame received for 1 I0519 21:52:25.826827 6 log.go:172] (0xc001bf9540) (1) Data frame handling I0519 21:52:25.826866 6 log.go:172] (0xc001bf9540) (1) Data frame sent I0519 21:52:25.826895 6 log.go:172] (0xc0051eeb00) (0xc001bf9540) Stream removed, broadcasting: 1 I0519 21:52:25.826919 6 log.go:172] (0xc0051eeb00) Go away received I0519 21:52:25.827110 6 log.go:172] (0xc0051eeb00) (0xc001bf9540) Stream removed, broadcasting: 1 I0519 21:52:25.827123 6 log.go:172] (0xc0051eeb00) (0xc001bf9860) Stream removed, broadcasting: 3 I0519 21:52:25.827128 6 log.go:172] (0xc0051eeb00) (0xc001bf99a0) Stream removed, broadcasting: 5 May 19 21:52:25.827: INFO: Waiting for responses: map[] May 19 21:52:25.830: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.87:8080/dial?request=hostname&protocol=http&host=10.244.2.86&port=8080&tries=1'] Namespace:pod-network-test-9965 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 21:52:25.830: INFO: >>> kubeConfig: /root/.kube/config I0519 21:52:25.858690 6 log.go:172] (0xc00605a6e0) (0xc001d76320) Create stream I0519 21:52:25.858719 6 log.go:172] (0xc00605a6e0) (0xc001d76320) Stream added, broadcasting: 1 I0519 21:52:25.860354 6 log.go:172] (0xc00605a6e0) Reply frame received for 1 I0519 21:52:25.860390 6 log.go:172] (0xc00605a6e0) (0xc001600140) Create stream I0519 21:52:25.860405 6 log.go:172] (0xc00605a6e0) (0xc001600140) Stream added, broadcasting: 3 I0519 21:52:25.861718 6 log.go:172] (0xc00605a6e0) Reply frame received for 3 I0519 21:52:25.861751 6 log.go:172] (0xc00605a6e0) (0xc001600280) Create stream I0519 21:52:25.861761 6 log.go:172] (0xc00605a6e0) (0xc001600280) Stream added, broadcasting: 5 I0519 21:52:25.862667 6 log.go:172] (0xc00605a6e0) Reply frame received for 5 I0519 21:52:25.933476 6 log.go:172] (0xc00605a6e0) Data frame received for 3 I0519 21:52:25.933507 6 log.go:172] (0xc001600140) (3) Data frame handling I0519 21:52:25.933525 6 log.go:172] (0xc001600140) (3) Data frame sent I0519 21:52:25.934037 6 log.go:172] (0xc00605a6e0) Data frame received for 3 I0519 21:52:25.934054 6 log.go:172] (0xc001600140) (3) Data frame handling I0519 21:52:25.934268 6 log.go:172] (0xc00605a6e0) Data frame received for 5 I0519 21:52:25.934289 6 log.go:172] (0xc001600280) (5) Data frame handling I0519 21:52:25.935800 6 log.go:172] (0xc00605a6e0) Data frame received for 1 I0519 21:52:25.935823 6 log.go:172] (0xc001d76320) (1) Data frame handling I0519 21:52:25.935847 6 log.go:172] (0xc001d76320) (1) Data frame sent I0519 21:52:25.935867 6 log.go:172] (0xc00605a6e0) (0xc001d76320) Stream removed, broadcasting: 1 I0519 21:52:25.935882 6 log.go:172] (0xc00605a6e0) Go away received I0519 21:52:25.935997 6 log.go:172] (0xc00605a6e0) (0xc001d76320) Stream removed, broadcasting: 1 I0519 21:52:25.936021 6 log.go:172] (0xc00605a6e0) (0xc001600140) Stream removed, broadcasting: 3 I0519 21:52:25.936034 6 log.go:172] (0xc00605a6e0) (0xc001600280) Stream removed, broadcasting: 5 May 19 21:52:25.936: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:52:25.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9965" for this suite. 
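The intra-pod networking check above runs agnhost netexec servers in pods on each node and, from a host-network test pod, curls the /dial endpoint (visible in the ExecWithOptions commands), which fans the request out to each target pod's hostname endpoint and aggregates the responses. A sketch of one netexec server pod (name and tag assumed):

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                  # hypothetical name
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # tag assumed
    args: ["netexec", "--http-port=8080"]
    ports:
    - containerPort: 8080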
• [SLOW TEST:24.434 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3262,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:52:25.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 19 21:52:25.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-6107' May 19 21:52:26.086: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 19 21:52:26.086: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 19 21:52:28.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6107' May 19 21:52:28.371: INFO: stderr: "" May 19 21:52:28.371: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:52:28.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6107" for this suite. 
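As the stderr line above notes, the deployment/apps.v1 generator for kubectl run was already deprecated at this point. A roughly equivalent Deployment manifest for the logged command (the run=<name> label key is an assumption about what that generator applied):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-httpd-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-httpd-deployment   # label assumed from the generator
  template:
    metadata:
      labels:
        run: e2e-test-httpd-deployment
    spec:
      containers:
      - name: e2e-test-httpd-deployment
        image: docker.io/library/httpd:2.4.38-alpine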
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":187,"skipped":3274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:52:28.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 19 21:52:28.670: INFO: Waiting up to 5m0s for pod "var-expansion-5a21727e-cc31-4762-a478-889863f7f521" in namespace "var-expansion-8939" to be "success or failure" May 19 21:52:28.675: INFO: Pod "var-expansion-5a21727e-cc31-4762-a478-889863f7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 5.712833ms May 19 21:52:30.680: INFO: Pod "var-expansion-5a21727e-cc31-4762-a478-889863f7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010025009s May 19 21:52:32.830: INFO: Pod "var-expansion-5a21727e-cc31-4762-a478-889863f7f521": Phase="Running", Reason="", readiness=true. Elapsed: 4.160272783s May 19 21:52:34.835: INFO: Pod "var-expansion-5a21727e-cc31-4762-a478-889863f7f521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.164948403s STEP: Saw pod success May 19 21:52:34.835: INFO: Pod "var-expansion-5a21727e-cc31-4762-a478-889863f7f521" satisfied condition "success or failure" May 19 21:52:34.838: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-5a21727e-cc31-4762-a478-889863f7f521 container dapi-container: STEP: delete the pod May 19 21:52:34.896: INFO: Waiting for pod var-expansion-5a21727e-cc31-4762-a478-889863f7f521 to disappear May 19 21:52:34.903: INFO: Pod var-expansion-5a21727e-cc31-4762-a478-889863f7f521 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:52:34.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8939" for this suite. 
• [SLOW TEST:6.450 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3310,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:52:34.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 21:52:35.489: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 21:52:37.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521955, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521955, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521955, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725521955, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 21:52:40.541: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the 
AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:52:52.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5333" for this suite. STEP: Destroying namespace "webhook-5333-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.942 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":189,"skipped":3322,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:52:52.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 19 21:52:52.887: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 19 21:52:52.905: INFO: Waiting for terminating namespaces to be deleted... 
May 19 21:52:52.907: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 19 21:52:52.911: INFO: e2e-test-httpd-deployment-594dddd44f-z6hlv from kubectl-6107 started at 2020-05-19 21:52:26 +0000 UTC (1 container statuses recorded) May 19 21:52:52.911: INFO: Container e2e-test-httpd-deployment ready: true, restart count 0 May 19 21:52:52.911: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 19 21:52:52.911: INFO: Container kindnet-cni ready: true, restart count 0 May 19 21:52:52.911: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 19 21:52:52.911: INFO: Container kube-proxy ready: true, restart count 0 May 19 21:52:52.911: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 19 21:52:52.916: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 19 21:52:52.916: INFO: Container kindnet-cni ready: true, restart count 0 May 19 21:52:52.916: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 19 21:52:52.916: INFO: Container kube-bench ready: false, restart count 0 May 19 21:52:52.916: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 19 21:52:52.916: INFO: Container kube-proxy ready: true, restart count 0 May 19 21:52:52.916: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 19 21:52:52.916: INFO: Container kube-hunter ready: false, restart count 0 May 19 21:52:52.916: INFO: sample-webhook-deployment-5f65f8c764-kl85l from webhook-5333 started at 2020-05-19 21:52:35 +0000 UTC (1 container statuses recorded) May 19 21:52:52.916: INFO: Container sample-webhook ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-65616adc-9d3c-459a-87ab-82ef4a8a4d8a 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-65616adc-9d3c-459a-87ab-82ef4a8a4d8a off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-65616adc-9d3c-459a-87ab-82ef4a8a4d8a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:58:01.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6801" for this suite. 
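The predicate exercised above: on one node, a pod that binds a hostPort on 0.0.0.0 conflicts with any later pod requesting the same hostPort and protocol, even with a specific hostIP such as 127.0.0.1, because 0.0.0.0 covers every host address. A sketch of the conflicting pair, reusing the node label and port from the log; the container image is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-65616adc-9d3c-459a-87ab-82ef4a8a4d8a: "95"   # label applied in the test
  containers:
  - name: c
    image: busybox                 # illustrative image
    ports:
    - containerPort: 8080
      hostPort: 54322              # hostIP omitted, i.e. 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5                       # expected to stay Pending
spec:
  nodeSelector:
    kubernetes.io/e2e-65616adc-9d3c-459a-87ab-82ef4a8a4d8a: "95"
  containers:
  - name: c
    image: busybox
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1            # still conflicts with pod4's 0.0.0.0 binding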
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.311 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":190,"skipped":3325,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:58:01.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 19 21:58:01.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8641' May 19 21:58:02.126: INFO: stderr: "" May 19 21:58:02.126: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 21:58:02.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8641' May 19 21:58:02.313: INFO: stderr: "" May 19 21:58:02.313: INFO: stdout: "update-demo-nautilus-5jbhl update-demo-nautilus-r9s5b " May 19 21:58:02.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jbhl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:02.403: INFO: stderr: "" May 19 21:58:02.404: INFO: stdout: "" May 19 21:58:02.404: INFO: update-demo-nautilus-5jbhl is created but not running May 19 21:58:07.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8641' May 19 21:58:07.493: INFO: stderr: "" May 19 21:58:07.493: INFO: stdout: "update-demo-nautilus-5jbhl update-demo-nautilus-r9s5b " May 19 21:58:07.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jbhl -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:07.575: INFO: stderr: "" May 19 21:58:07.575: INFO: stdout: "true" May 19 21:58:07.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jbhl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:07.671: INFO: stderr: "" May 19 21:58:07.671: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:58:07.671: INFO: validating pod update-demo-nautilus-5jbhl May 19 21:58:07.675: INFO: got data: { "image": "nautilus.jpg" } May 19 21:58:07.675: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 21:58:07.675: INFO: update-demo-nautilus-5jbhl is verified up and running May 19 21:58:07.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9s5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:07.774: INFO: stderr: "" May 19 21:58:07.774: INFO: stdout: "true" May 19 21:58:07.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9s5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:07.953: INFO: stderr: "" May 19 21:58:07.953: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:58:07.953: INFO: validating pod update-demo-nautilus-r9s5b May 19 21:58:07.957: INFO: got data: { "image": "nautilus.jpg" } May 19 21:58:07.957: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 21:58:07.957: INFO: update-demo-nautilus-r9s5b is verified up and running STEP: scaling down the replication controller May 19 21:58:07.959: INFO: scanned /root for discovery docs: May 19 21:58:07.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8641' May 19 21:58:09.084: INFO: stderr: "" May 19 21:58:09.084: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 19 21:58:09.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8641' May 19 21:58:09.227: INFO: stderr: "" May 19 21:58:09.227: INFO: stdout: "update-demo-nautilus-5jbhl update-demo-nautilus-r9s5b " STEP: Replicas for name=update-demo: expected=1 actual=2 May 19 21:58:14.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8641' May 19 21:58:14.318: INFO: stderr: "" May 19 21:58:14.318: INFO: stdout: "update-demo-nautilus-5jbhl update-demo-nautilus-r9s5b " STEP: Replicas for name=update-demo: expected=1 actual=2 May 19 21:58:19.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8641' May 19 21:58:19.422: INFO: stderr: "" May 19 21:58:19.422: INFO: stdout: "update-demo-nautilus-5jbhl " May 19 21:58:19.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jbhl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:19.517: INFO: stderr: "" May 19 21:58:19.517: INFO: stdout: "true" May 19 21:58:19.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jbhl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:19.617: INFO: stderr: "" May 19 21:58:19.617: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:58:19.617: INFO: validating pod update-demo-nautilus-5jbhl May 19 21:58:19.619: INFO: got data: { "image": "nautilus.jpg" } May 19 21:58:19.619: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 21:58:19.619: INFO: update-demo-nautilus-5jbhl is verified up and running STEP: scaling up the replication controller May 19 21:58:19.622: INFO: scanned /root for discovery docs: May 19 21:58:19.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8641' May 19 21:58:20.750: INFO: stderr: "" May 19 21:58:20.750: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 21:58:20.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8641' May 19 21:58:20.846: INFO: stderr: "" May 19 21:58:20.846: INFO: stdout: "update-demo-nautilus-5jbhl update-demo-nautilus-lczz7 " May 19 21:58:20.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jbhl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:20.936: INFO: stderr: "" May 19 21:58:20.936: INFO: stdout: "true" May 19 21:58:20.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jbhl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:21.043: INFO: stderr: "" May 19 21:58:21.043: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:58:21.043: INFO: validating pod update-demo-nautilus-5jbhl May 19 21:58:21.111: INFO: got data: { "image": "nautilus.jpg" } May 19 21:58:21.111: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 21:58:21.111: INFO: update-demo-nautilus-5jbhl is verified up and running May 19 21:58:21.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lczz7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:21.212: INFO: stderr: "" May 19 21:58:21.212: INFO: stdout: "" May 19 21:58:21.212: INFO: update-demo-nautilus-lczz7 is created but not running May 19 21:58:26.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8641' May 19 21:58:26.313: INFO: stderr: "" May 19 21:58:26.313: INFO: stdout: "update-demo-nautilus-5jbhl update-demo-nautilus-lczz7 " May 19 21:58:26.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jbhl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:26.408: INFO: stderr: "" May 19 21:58:26.408: INFO: stdout: "true" May 19 21:58:26.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jbhl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:26.503: INFO: stderr: "" May 19 21:58:26.503: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:58:26.503: INFO: validating pod update-demo-nautilus-5jbhl May 19 21:58:26.506: INFO: got data: { "image": "nautilus.jpg" } May 19 21:58:26.506: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 21:58:26.506: INFO: update-demo-nautilus-5jbhl is verified up and running May 19 21:58:26.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lczz7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:26.614: INFO: stderr: "" May 19 21:58:26.614: INFO: stdout: "true" May 19 21:58:26.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lczz7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8641' May 19 21:58:26.715: INFO: stderr: "" May 19 21:58:26.715: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 21:58:26.715: INFO: validating pod update-demo-nautilus-lczz7 May 19 21:58:26.719: INFO: got data: { "image": "nautilus.jpg" } May 19 21:58:26.719: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 21:58:26.719: INFO: update-demo-nautilus-lczz7 is verified up and running STEP: using delete to clean up resources May 19 21:58:26.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8641' May 19 21:58:26.851: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 21:58:26.851: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 19 21:58:26.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8641' May 19 21:58:26.955: INFO: stderr: "No resources found in kubectl-8641 namespace.\n" May 19 21:58:26.955: INFO: stdout: "" May 19 21:58:26.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8641 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 21:58:27.056: INFO: stderr: "" May 19 21:58:27.056: INFO: stdout: "update-demo-nautilus-5jbhl\nupdate-demo-nautilus-lczz7\n" May 19 21:58:27.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8641' May 19 21:58:27.657: INFO: stderr: "No resources found in kubectl-8641 namespace.\n" May 19 21:58:27.657: INFO: stdout: "" May 19 21:58:27.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8641 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 21:58:27.744: INFO: stderr: "" May 19 21:58:27.744: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:58:27.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8641" for this suite. 
• [SLOW TEST:26.586 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":191,"skipped":3339,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:58:27.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-ab759cb4-1acb-4df8-b115-bfea0be019ba STEP: Creating a pod to test consume configMaps May 19 21:58:27.860: INFO: Waiting up to 5m0s for pod "pod-configmaps-f85d6ce0-f12e-42f3-b87f-7fe1e3e78b33" in namespace "configmap-3591" to be "success or failure" May 19 21:58:28.115: INFO: Pod "pod-configmaps-f85d6ce0-f12e-42f3-b87f-7fe1e3e78b33": Phase="Pending", Reason="", readiness=false. Elapsed: 254.443841ms May 19 21:58:30.215: INFO: Pod "pod-configmaps-f85d6ce0-f12e-42f3-b87f-7fe1e3e78b33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354356656s May 19 21:58:32.219: INFO: Pod "pod-configmaps-f85d6ce0-f12e-42f3-b87f-7fe1e3e78b33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.358551005s STEP: Saw pod success May 19 21:58:32.219: INFO: Pod "pod-configmaps-f85d6ce0-f12e-42f3-b87f-7fe1e3e78b33" satisfied condition "success or failure" May 19 21:58:32.222: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f85d6ce0-f12e-42f3-b87f-7fe1e3e78b33 container configmap-volume-test: STEP: delete the pod May 19 21:58:32.397: INFO: Waiting for pod pod-configmaps-f85d6ce0-f12e-42f3-b87f-7fe1e3e78b33 to disappear May 19 21:58:32.412: INFO: Pod pod-configmaps-f85d6ce0-f12e-42f3-b87f-7fe1e3e78b33 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:58:32.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3591" for this suite. 
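The ConfigMap volume test above creates a ConfigMap, mounts it with a non-default defaultMode, and has the pod verify the projected file's content and permissions. A sketch, assuming an already-created ConfigMap; names, image, and the mode value are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # illustrative image
    command: ["sh", "-c", "ls -l /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume  # ConfigMap created beforehand, as in the test
      defaultMode: 0400            # octal mode applied to every projected key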
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3346,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:58:32.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 19 21:58:32.575: INFO: Waiting up to 5m0s for pod "pod-3f8f4e3e-b8ae-4c4c-acaf-dc09419f5b4d" in namespace "emptydir-7562" to be "success or failure" May 19 21:58:32.580: INFO: Pod "pod-3f8f4e3e-b8ae-4c4c-acaf-dc09419f5b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1034ms May 19 21:58:34.583: INFO: Pod "pod-3f8f4e3e-b8ae-4c4c-acaf-dc09419f5b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00798058s May 19 21:58:36.588: INFO: Pod "pod-3f8f4e3e-b8ae-4c4c-acaf-dc09419f5b4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01231243s STEP: Saw pod success May 19 21:58:36.588: INFO: Pod "pod-3f8f4e3e-b8ae-4c4c-acaf-dc09419f5b4d" satisfied condition "success or failure" May 19 21:58:36.591: INFO: Trying to get logs from node jerma-worker pod pod-3f8f4e3e-b8ae-4c4c-acaf-dc09419f5b4d container test-container: STEP: delete the pod May 19 21:58:36.626: INFO: Waiting for pod pod-3f8f4e3e-b8ae-4c4c-acaf-dc09419f5b4d to disappear May 19 21:58:36.636: INFO: Pod pod-3f8f4e3e-b8ae-4c4c-acaf-dc09419f5b4d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:58:36.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7562" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3362,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:58:36.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 21:58:36.712: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 19 21:58:39.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7420 create -f -' May 19 21:58:43.187: INFO: stderr: "" May 19 21:58:43.187: INFO: stdout: "e2e-test-crd-publish-openapi-3125-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 19 21:58:43.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7420 delete e2e-test-crd-publish-openapi-3125-crds test-foo' May 19 21:58:43.287: INFO: stderr: "" May 19 21:58:43.287: INFO: stdout: "e2e-test-crd-publish-openapi-3125-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 19 21:58:43.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7420 apply -f -' May 19 21:58:43.541: INFO: stderr: "" May 19 21:58:43.541: INFO: stdout: "e2e-test-crd-publish-openapi-3125-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 19 21:58:43.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7420 delete e2e-test-crd-publish-openapi-3125-crds test-foo' May 19 21:58:43.653: INFO: stderr: "" May 19 21:58:43.653: INFO: stdout: "e2e-test-crd-publish-openapi-3125-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 19 21:58:43.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7420 create -f -' May 19 21:58:43.881: INFO: rc: 1 May 19 21:58:43.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7420 apply -f -' May 19 21:58:44.117: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 19 21:58:44.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7420 create -f -' May 19 21:58:44.350: INFO: rc: 1 May 19 21:58:44.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7420 apply -f -' May 19 21:58:44.603: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 19 21:58:44.603: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3125-crds' May 19 21:58:44.860: INFO: stderr: "" May 19 21:58:44.860: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3125-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 19 21:58:44.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3125-crds.metadata' May 19 21:58:45.087: INFO: stderr: "" May 19 21:58:45.087: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3125-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 19 21:58:45.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3125-crds.spec' May 19 21:58:45.324: INFO: stderr: "" May 19 21:58:45.324: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3125-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 19 21:58:45.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3125-crds.spec.bars' May 19 21:58:45.551: INFO: stderr: "" May 19 21:58:45.551: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3125-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 19 21:58:45.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3125-crds.spec.bars2' May 19 21:58:45.785: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:58:47.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7420" for this suite. • [SLOW TEST:11.092 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":194,"skipped":3368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:58:47.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 19 21:58:47.810: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 19 21:58:47.836: INFO: Waiting for terminating namespaces to be deleted... 
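As an aside on the CustomResourcePublishOpenAPI run above: kubectl explain can only render those fields because the CRD publishes an OpenAPI v3 validation schema. A hedged sketch of a comparable CRD (group, kind, and field names invented, loosely mirroring the explain output):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: [name]
                  properties:
                    name: {type: string}                      # required, as in the explain output
                    age:  {type: string}
                    bazs: {type: array, items: {type: string}}
EOF
kubectl explain foos.spec.bars    # rendered from the published OpenAPI document
kubectl explain foos.spec.bars2   # unknown property: exits non-zero, as in the rc: 1 above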
May 19 21:58:47.839: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 19 21:58:47.844: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 21:58:47.844: INFO: Container kindnet-cni ready: true, restart count 0 May 19 21:58:47.844: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 21:58:47.844: INFO: Container kube-proxy ready: true, restart count 0 May 19 21:58:47.844: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 19 21:58:47.849: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 21:58:47.849: INFO: Container kindnet-cni ready: true, restart count 0 May 19 21:58:47.849: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 19 21:58:47.849: INFO: Container kube-bench ready: false, restart count 0 May 19 21:58:47.849: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 21:58:47.849: INFO: Container kube-proxy ready: true, restart count 0 May 19 21:58:47.849: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 19 21:58:47.849: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 19 21:58:47.934: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 19 21:58:47.934: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 19 21:58:47.934: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 19 21:58:47.934: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 19 21:58:47.934: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 19 21:58:47.940: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-2056500d-2a26-4f67-8bd1-f0d2d944fcbc.16108cd808e50652], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1869/filler-pod-2056500d-2a26-4f67-8bd1-f0d2d944fcbc to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-2056500d-2a26-4f67-8bd1-f0d2d944fcbc.16108cd8969d88a8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2056500d-2a26-4f67-8bd1-f0d2d944fcbc.16108cd8df321c26], Reason = [Created], Message = [Created container filler-pod-2056500d-2a26-4f67-8bd1-f0d2d944fcbc] STEP: Considering event: Type = [Normal], Name = [filler-pod-2056500d-2a26-4f67-8bd1-f0d2d944fcbc.16108cd8f0cca15d], Reason = [Started], Message = [Started container filler-pod-2056500d-2a26-4f67-8bd1-f0d2d944fcbc] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0a92ece-d597-4452-8537-829440412977.16108cd80899d30c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1869/filler-pod-d0a92ece-d597-4452-8537-829440412977 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0a92ece-d597-4452-8537-829440412977.16108cd8584a7e0f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0a92ece-d597-4452-8537-829440412977.16108cd8c29d049b], Reason = [Created], Message = [Created container filler-pod-d0a92ece-d597-4452-8537-829440412977] STEP: Considering event: Type = [Normal], Name = [filler-pod-d0a92ece-d597-4452-8537-829440412977.16108cd8d9661682], Reason = [Started], Message = [Started container filler-pod-d0a92ece-d597-4452-8537-829440412977] STEP: Considering event: Type = [Warning], Name = [additional-pod.16108cd8fc2836cb], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 21:58:53.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1869" for this suite. 
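The Warning event above is the assertion's target: once the filler pods request nearly all allocatable CPU, any further request fails predicate checking with "Insufficient cpu". A hedged sketch of a pod that exercises the same scheduling path (name and request size invented; it only pends if the request genuinely exceeds what remains on every schedulable node):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1"          # compared against each node's remaining allocatable CPU
      limits:
        cpu: "1"
EOF
kubectl describe pod additional-pod-demo   # Events should show FailedScheduling / Insufficient cpu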
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.408 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":195,"skipped":3395,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 21:58:53.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5355 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5355 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5355 May 19 21:58:53.281: INFO: Found 0 stateful pods, waiting for 1 May 19 21:59:03.286: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 19 21:59:03.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5355 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:59:03.572: INFO: stderr: "I0519 21:59:03.415990 3339 log.go:172] (0xc000a1d970) (0xc000c10280) Create stream\nI0519 21:59:03.416052 3339 log.go:172] (0xc000a1d970) (0xc000c10280) Stream added, broadcasting: 1\nI0519 21:59:03.418726 3339 log.go:172] (0xc000a1d970) Reply frame received for 1\nI0519 21:59:03.418762 3339 log.go:172] (0xc000a1d970) (0xc000c10320) Create stream\nI0519 21:59:03.418771 3339 log.go:172] (0xc000a1d970) (0xc000c10320) Stream added, broadcasting: 3\nI0519 21:59:03.419647 3339 log.go:172] (0xc000a1d970) Reply frame received for 3\nI0519 21:59:03.419690 3339 log.go:172] (0xc000a1d970) (0xc000c103c0) Create stream\nI0519 21:59:03.419701 3339 log.go:172] (0xc000a1d970) (0xc000c103c0) Stream added, broadcasting: 5\nI0519 21:59:03.420629 3339 log.go:172] (0xc000a1d970) Reply frame received for 5\nI0519 21:59:03.507123 3339 log.go:172] (0xc000a1d970) Data frame received 
for 5\nI0519 21:59:03.507151 3339 log.go:172] (0xc000c103c0) (5) Data frame handling\nI0519 21:59:03.507165 3339 log.go:172] (0xc000c103c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 21:59:03.566743 3339 log.go:172] (0xc000a1d970) Data frame received for 3\nI0519 21:59:03.566767 3339 log.go:172] (0xc000c10320) (3) Data frame handling\nI0519 21:59:03.566785 3339 log.go:172] (0xc000c10320) (3) Data frame sent\nI0519 21:59:03.566961 3339 log.go:172] (0xc000a1d970) Data frame received for 5\nI0519 21:59:03.566972 3339 log.go:172] (0xc000c103c0) (5) Data frame handling\nI0519 21:59:03.566994 3339 log.go:172] (0xc000a1d970) Data frame received for 3\nI0519 21:59:03.567000 3339 log.go:172] (0xc000c10320) (3) Data frame handling\nI0519 21:59:03.568841 3339 log.go:172] (0xc000a1d970) Data frame received for 1\nI0519 21:59:03.568858 3339 log.go:172] (0xc000c10280) (1) Data frame handling\nI0519 21:59:03.568864 3339 log.go:172] (0xc000c10280) (1) Data frame sent\nI0519 21:59:03.568877 3339 log.go:172] (0xc000a1d970) (0xc000c10280) Stream removed, broadcasting: 1\nI0519 21:59:03.568924 3339 log.go:172] (0xc000a1d970) Go away received\nI0519 21:59:03.569230 3339 log.go:172] (0xc000a1d970) (0xc000c10280) Stream removed, broadcasting: 1\nI0519 21:59:03.569306 3339 log.go:172] (0xc000a1d970) (0xc000c10320) Stream removed, broadcasting: 3\nI0519 21:59:03.569312 3339 log.go:172] (0xc000a1d970) (0xc000c103c0) Stream removed, broadcasting: 5\n" May 19 21:59:03.572: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:59:03.572: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 21:59:03.576: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 19 21:59:13.581: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 21:59:13.581: INFO: Waiting for statefulset status.replicas updated to 0 May 19 21:59:13.620: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999471s May 19 21:59:14.624: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.971279168s May 19 21:59:15.629: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.966619241s May 19 21:59:16.634: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.961993224s May 19 21:59:17.639: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.956889199s May 19 21:59:18.643: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.95217231s May 19 21:59:19.648: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.947443223s May 19 21:59:20.652: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.943192385s May 19 21:59:21.656: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.938598914s May 19 21:59:22.660: INFO: Verifying statefulset ss doesn't scale past 1 for another 935.279978ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5355 May 19 21:59:23.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5355 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 21:59:23.912: INFO: stderr: "I0519 21:59:23.800999 3359 log.go:172] (0xc000a6d760) (0xc000940640) Create stream\nI0519 21:59:23.801061 3359 log.go:172] (0xc000a6d760) (0xc000940640) Stream 
added, broadcasting: 1\nI0519 21:59:23.805045 3359 log.go:172] (0xc000a6d760) Reply frame received for 1\nI0519 21:59:23.805354 3359 log.go:172] (0xc000a6d760) (0xc0006b9a40) Create stream\nI0519 21:59:23.805387 3359 log.go:172] (0xc000a6d760) (0xc0006b9a40) Stream added, broadcasting: 3\nI0519 21:59:23.806570 3359 log.go:172] (0xc000a6d760) Reply frame received for 3\nI0519 21:59:23.806630 3359 log.go:172] (0xc000a6d760) (0xc000664640) Create stream\nI0519 21:59:23.806653 3359 log.go:172] (0xc000a6d760) (0xc000664640) Stream added, broadcasting: 5\nI0519 21:59:23.807960 3359 log.go:172] (0xc000a6d760) Reply frame received for 5\nI0519 21:59:23.904914 3359 log.go:172] (0xc000a6d760) Data frame received for 3\nI0519 21:59:23.904954 3359 log.go:172] (0xc0006b9a40) (3) Data frame handling\nI0519 21:59:23.904977 3359 log.go:172] (0xc0006b9a40) (3) Data frame sent\nI0519 21:59:23.904989 3359 log.go:172] (0xc000a6d760) Data frame received for 3\nI0519 21:59:23.905000 3359 log.go:172] (0xc0006b9a40) (3) Data frame handling\nI0519 21:59:23.905395 3359 log.go:172] (0xc000a6d760) Data frame received for 5\nI0519 21:59:23.905505 3359 log.go:172] (0xc000664640) (5) Data frame handling\nI0519 21:59:23.905615 3359 log.go:172] (0xc000664640) (5) Data frame sent\nI0519 21:59:23.905653 3359 log.go:172] (0xc000a6d760) Data frame received for 5\nI0519 21:59:23.905676 3359 log.go:172] (0xc000664640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 21:59:23.907412 3359 log.go:172] (0xc000a6d760) Data frame received for 1\nI0519 21:59:23.907447 3359 log.go:172] (0xc000940640) (1) Data frame handling\nI0519 21:59:23.907480 3359 log.go:172] (0xc000940640) (1) Data frame sent\nI0519 21:59:23.907502 3359 log.go:172] (0xc000a6d760) (0xc000940640) Stream removed, broadcasting: 1\nI0519 21:59:23.907539 3359 log.go:172] (0xc000a6d760) Go away received\nI0519 21:59:23.907961 3359 log.go:172] (0xc000a6d760) (0xc000940640) Stream removed, broadcasting: 1\nI0519 21:59:23.907979 3359 log.go:172] (0xc000a6d760) (0xc0006b9a40) Stream removed, broadcasting: 3\nI0519 21:59:23.907988 3359 log.go:172] (0xc000a6d760) (0xc000664640) Stream removed, broadcasting: 5\n" May 19 21:59:23.913: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 21:59:23.913: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 21:59:23.916: INFO: Found 1 stateful pods, waiting for 3 May 19 21:59:33.920: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 19 21:59:33.920: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 19 21:59:33.920: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 19 21:59:33.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5355 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:59:34.163: INFO: stderr: "I0519 21:59:34.052435 3379 log.go:172] (0xc000aacf20) (0xc000a98320) Create stream\nI0519 21:59:34.052488 3379 log.go:172] (0xc000aacf20) (0xc000a98320) Stream added, broadcasting: 1\nI0519 21:59:34.054698 3379 log.go:172] (0xc000aacf20) Reply frame received for 1\nI0519 21:59:34.054739 3379 log.go:172] (0xc000aacf20) (0xc000a583c0) Create 
stream\nI0519 21:59:34.054751 3379 log.go:172] (0xc000aacf20) (0xc000a583c0) Stream added, broadcasting: 3\nI0519 21:59:34.055667 3379 log.go:172] (0xc000aacf20) Reply frame received for 3\nI0519 21:59:34.055698 3379 log.go:172] (0xc000aacf20) (0xc000a365a0) Create stream\nI0519 21:59:34.055706 3379 log.go:172] (0xc000aacf20) (0xc000a365a0) Stream added, broadcasting: 5\nI0519 21:59:34.056464 3379 log.go:172] (0xc000aacf20) Reply frame received for 5\nI0519 21:59:34.156500 3379 log.go:172] (0xc000aacf20) Data frame received for 3\nI0519 21:59:34.156536 3379 log.go:172] (0xc000a583c0) (3) Data frame handling\nI0519 21:59:34.156545 3379 log.go:172] (0xc000a583c0) (3) Data frame sent\nI0519 21:59:34.156552 3379 log.go:172] (0xc000aacf20) Data frame received for 3\nI0519 21:59:34.156556 3379 log.go:172] (0xc000a583c0) (3) Data frame handling\nI0519 21:59:34.156566 3379 log.go:172] (0xc000aacf20) Data frame received for 5\nI0519 21:59:34.156571 3379 log.go:172] (0xc000a365a0) (5) Data frame handling\nI0519 21:59:34.156580 3379 log.go:172] (0xc000a365a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 21:59:34.156668 3379 log.go:172] (0xc000aacf20) Data frame received for 5\nI0519 21:59:34.156680 3379 log.go:172] (0xc000a365a0) (5) Data frame handling\nI0519 21:59:34.157936 3379 log.go:172] (0xc000aacf20) Data frame received for 1\nI0519 21:59:34.157967 3379 log.go:172] (0xc000a98320) (1) Data frame handling\nI0519 21:59:34.157982 3379 log.go:172] (0xc000a98320) (1) Data frame sent\nI0519 21:59:34.157990 3379 log.go:172] (0xc000aacf20) (0xc000a98320) Stream removed, broadcasting: 1\nI0519 21:59:34.158002 3379 log.go:172] (0xc000aacf20) Go away received\nI0519 21:59:34.158341 3379 log.go:172] (0xc000aacf20) (0xc000a98320) Stream removed, broadcasting: 1\nI0519 21:59:34.158371 3379 log.go:172] (0xc000aacf20) (0xc000a583c0) Stream removed, broadcasting: 3\nI0519 21:59:34.158380 3379 log.go:172] (0xc000aacf20) (0xc000a365a0) Stream removed, broadcasting: 5\n" May 19 21:59:34.163: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:59:34.163: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 21:59:34.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5355 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:59:34.454: INFO: stderr: "I0519 21:59:34.331250 3400 log.go:172] (0xc000a0f810) (0xc0009e2780) Create stream\nI0519 21:59:34.331341 3400 log.go:172] (0xc000a0f810) (0xc0009e2780) Stream added, broadcasting: 1\nI0519 21:59:34.338290 3400 log.go:172] (0xc000a0f810) Reply frame received for 1\nI0519 21:59:34.338360 3400 log.go:172] (0xc000a0f810) (0xc0005b4640) Create stream\nI0519 21:59:34.338376 3400 log.go:172] (0xc000a0f810) (0xc0005b4640) Stream added, broadcasting: 3\nI0519 21:59:34.339287 3400 log.go:172] (0xc000a0f810) Reply frame received for 3\nI0519 21:59:34.339339 3400 log.go:172] (0xc000a0f810) (0xc0007a72c0) Create stream\nI0519 21:59:34.339358 3400 log.go:172] (0xc000a0f810) (0xc0007a72c0) Stream added, broadcasting: 5\nI0519 21:59:34.340277 3400 log.go:172] (0xc000a0f810) Reply frame received for 5\nI0519 21:59:34.404944 3400 log.go:172] (0xc000a0f810) Data frame received for 5\nI0519 21:59:34.404979 3400 log.go:172] (0xc0007a72c0) (5) Data frame handling\nI0519 21:59:34.405005 3400 log.go:172] (0xc0007a72c0) (5) Data frame sent\n+ 
mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 21:59:34.445828 3400 log.go:172] (0xc000a0f810) Data frame received for 5\nI0519 21:59:34.445872 3400 log.go:172] (0xc0007a72c0) (5) Data frame handling\nI0519 21:59:34.445900 3400 log.go:172] (0xc000a0f810) Data frame received for 3\nI0519 21:59:34.445918 3400 log.go:172] (0xc0005b4640) (3) Data frame handling\nI0519 21:59:34.445937 3400 log.go:172] (0xc0005b4640) (3) Data frame sent\nI0519 21:59:34.445961 3400 log.go:172] (0xc000a0f810) Data frame received for 3\nI0519 21:59:34.445982 3400 log.go:172] (0xc0005b4640) (3) Data frame handling\nI0519 21:59:34.448325 3400 log.go:172] (0xc000a0f810) Data frame received for 1\nI0519 21:59:34.448354 3400 log.go:172] (0xc0009e2780) (1) Data frame handling\nI0519 21:59:34.448370 3400 log.go:172] (0xc0009e2780) (1) Data frame sent\nI0519 21:59:34.448388 3400 log.go:172] (0xc000a0f810) (0xc0009e2780) Stream removed, broadcasting: 1\nI0519 21:59:34.448406 3400 log.go:172] (0xc000a0f810) Go away received\nI0519 21:59:34.448838 3400 log.go:172] (0xc000a0f810) (0xc0009e2780) Stream removed, broadcasting: 1\nI0519 21:59:34.448858 3400 log.go:172] (0xc000a0f810) (0xc0005b4640) Stream removed, broadcasting: 3\nI0519 21:59:34.448876 3400 log.go:172] (0xc000a0f810) (0xc0007a72c0) Stream removed, broadcasting: 5\n" May 19 21:59:34.454: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:59:34.454: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 21:59:34.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5355 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 21:59:34.680: INFO: stderr: "I0519 21:59:34.580250 3420 log.go:172] (0xc0003c0000) (0xc000900000) Create stream\nI0519 21:59:34.580318 3420 log.go:172] (0xc0003c0000) (0xc000900000) Stream added, broadcasting: 1\nI0519 21:59:34.583051 3420 log.go:172] (0xc0003c0000) Reply frame received for 1\nI0519 21:59:34.583091 3420 log.go:172] (0xc0003c0000) (0xc0009000a0) Create stream\nI0519 21:59:34.583104 3420 log.go:172] (0xc0003c0000) (0xc0009000a0) Stream added, broadcasting: 3\nI0519 21:59:34.584092 3420 log.go:172] (0xc0003c0000) Reply frame received for 3\nI0519 21:59:34.584141 3420 log.go:172] (0xc0003c0000) (0xc000711540) Create stream\nI0519 21:59:34.584166 3420 log.go:172] (0xc0003c0000) (0xc000711540) Stream added, broadcasting: 5\nI0519 21:59:34.585053 3420 log.go:172] (0xc0003c0000) Reply frame received for 5\nI0519 21:59:34.636755 3420 log.go:172] (0xc0003c0000) Data frame received for 5\nI0519 21:59:34.636792 3420 log.go:172] (0xc000711540) (5) Data frame handling\nI0519 21:59:34.636828 3420 log.go:172] (0xc000711540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 21:59:34.673572 3420 log.go:172] (0xc0003c0000) Data frame received for 3\nI0519 21:59:34.673611 3420 log.go:172] (0xc0009000a0) (3) Data frame handling\nI0519 21:59:34.673633 3420 log.go:172] (0xc0009000a0) (3) Data frame sent\nI0519 21:59:34.673966 3420 log.go:172] (0xc0003c0000) Data frame received for 3\nI0519 21:59:34.674076 3420 log.go:172] (0xc0009000a0) (3) Data frame handling\nI0519 21:59:34.674104 3420 log.go:172] (0xc0003c0000) Data frame received for 5\nI0519 21:59:34.674113 3420 log.go:172] (0xc000711540) (5) Data frame handling\nI0519 21:59:34.675686 3420 log.go:172] (0xc0003c0000) Data frame received for 1\nI0519 
21:59:34.675706 3420 log.go:172] (0xc000900000) (1) Data frame handling\nI0519 21:59:34.675713 3420 log.go:172] (0xc000900000) (1) Data frame sent\nI0519 21:59:34.675721 3420 log.go:172] (0xc0003c0000) (0xc000900000) Stream removed, broadcasting: 1\nI0519 21:59:34.675805 3420 log.go:172] (0xc0003c0000) Go away received\nI0519 21:59:34.675976 3420 log.go:172] (0xc0003c0000) (0xc000900000) Stream removed, broadcasting: 1\nI0519 21:59:34.675998 3420 log.go:172] (0xc0003c0000) (0xc0009000a0) Stream removed, broadcasting: 3\nI0519 21:59:34.676013 3420 log.go:172] (0xc0003c0000) (0xc000711540) Stream removed, broadcasting: 5\n" May 19 21:59:34.680: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 21:59:34.680: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 21:59:34.680: INFO: Waiting for statefulset status.replicas updated to 0 May 19 21:59:34.683: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 19 21:59:44.694: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 21:59:44.695: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 19 21:59:44.695: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 19 21:59:44.728: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999379s May 19 21:59:45.733: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.972200336s May 19 21:59:46.738: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.967250507s May 19 21:59:47.744: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962142691s May 19 21:59:48.749: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.957087685s May 19 21:59:49.754: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.951734236s May 19 21:59:50.759: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.946456435s May 19 21:59:51.764: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.941153279s May 19 21:59:52.770: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.935964806s May 19 21:59:53.775: INFO: Verifying statefulset ss doesn't scale past 3 for another 931.091093ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5355 May 19 21:59:54.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5355 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 21:59:54.990: INFO: stderr: "I0519 21:59:54.905987 3440 log.go:172] (0xc000b704d0) (0xc000b440a0) Create stream\nI0519 21:59:54.906038 3440 log.go:172] (0xc000b704d0) (0xc000b440a0) Stream added, broadcasting: 1\nI0519 21:59:54.908756 3440 log.go:172] (0xc000b704d0) Reply frame received for 1\nI0519 21:59:54.908795 3440 log.go:172] (0xc000b704d0) (0xc000b000a0) Create stream\nI0519 21:59:54.908806 3440 log.go:172] (0xc000b704d0) (0xc000b000a0) Stream added, broadcasting: 3\nI0519 21:59:54.909847 3440 log.go:172] (0xc000b704d0) Reply frame received for 3\nI0519 21:59:54.909868 3440 log.go:172] (0xc000b704d0) (0xc0009d00a0) Create stream\nI0519 21:59:54.909874 3440 log.go:172] (0xc000b704d0) (0xc0009d00a0) Stream added, broadcasting: 5\nI0519 21:59:54.910702 3440 log.go:172] (0xc000b704d0) Reply frame received for 5\nI0519
21:59:54.985477 3440 log.go:172] (0xc000b704d0) Data frame received for 3\nI0519 21:59:54.985513 3440 log.go:172] (0xc000b000a0) (3) Data frame handling\nI0519 21:59:54.985527 3440 log.go:172] (0xc000b000a0) (3) Data frame sent\nI0519 21:59:54.985536 3440 log.go:172] (0xc000b704d0) Data frame received for 3\nI0519 21:59:54.985544 3440 log.go:172] (0xc000b000a0) (3) Data frame handling\nI0519 21:59:54.985573 3440 log.go:172] (0xc000b704d0) Data frame received for 5\nI0519 21:59:54.985588 3440 log.go:172] (0xc0009d00a0) (5) Data frame handling\nI0519 21:59:54.985602 3440 log.go:172] (0xc0009d00a0) (5) Data frame sent\nI0519 21:59:54.985615 3440 log.go:172] (0xc000b704d0) Data frame received for 5\nI0519 21:59:54.985625 3440 log.go:172] (0xc0009d00a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 21:59:54.986485 3440 log.go:172] (0xc000b704d0) Data frame received for 1\nI0519 21:59:54.986498 3440 log.go:172] (0xc000b440a0) (1) Data frame handling\nI0519 21:59:54.986516 3440 log.go:172] (0xc000b440a0) (1) Data frame sent\nI0519 21:59:54.986529 3440 log.go:172] (0xc000b704d0) (0xc000b440a0) Stream removed, broadcasting: 1\nI0519 21:59:54.986608 3440 log.go:172] (0xc000b704d0) Go away received\nI0519 21:59:54.987058 3440 log.go:172] (0xc000b704d0) (0xc000b440a0) Stream removed, broadcasting: 1\nI0519 21:59:54.987076 3440 log.go:172] (0xc000b704d0) (0xc000b000a0) Stream removed, broadcasting: 3\nI0519 21:59:54.987084 3440 log.go:172] (0xc000b704d0) (0xc0009d00a0) Stream removed, broadcasting: 5\n" May 19 21:59:54.990: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 21:59:54.990: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 21:59:54.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5355 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 21:59:55.184: INFO: stderr: "I0519 21:59:55.115811 3460 log.go:172] (0xc0000f4370) (0xc00074b540) Create stream\nI0519 21:59:55.115889 3460 log.go:172] (0xc0000f4370) (0xc00074b540) Stream added, broadcasting: 1\nI0519 21:59:55.118614 3460 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0519 21:59:55.118686 3460 log.go:172] (0xc0000f4370) (0xc00099c000) Create stream\nI0519 21:59:55.118704 3460 log.go:172] (0xc0000f4370) (0xc00099c000) Stream added, broadcasting: 3\nI0519 21:59:55.119748 3460 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0519 21:59:55.119775 3460 log.go:172] (0xc0000f4370) (0xc0009c6000) Create stream\nI0519 21:59:55.119781 3460 log.go:172] (0xc0000f4370) (0xc0009c6000) Stream added, broadcasting: 5\nI0519 21:59:55.120670 3460 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0519 21:59:55.178338 3460 log.go:172] (0xc0000f4370) Data frame received for 3\nI0519 21:59:55.178368 3460 log.go:172] (0xc00099c000) (3) Data frame handling\nI0519 21:59:55.178377 3460 log.go:172] (0xc00099c000) (3) Data frame sent\nI0519 21:59:55.178383 3460 log.go:172] (0xc0000f4370) Data frame received for 3\nI0519 21:59:55.178389 3460 log.go:172] (0xc00099c000) (3) Data frame handling\nI0519 21:59:55.178413 3460 log.go:172] (0xc0000f4370) Data frame received for 5\nI0519 21:59:55.178419 3460 log.go:172] (0xc0009c6000) (5) Data frame handling\nI0519 21:59:55.178426 3460 log.go:172] (0xc0009c6000) (5) Data frame sent\nI0519 21:59:55.178431 3460 log.go:172] (0xc0000f4370) Data frame 
received for 5\nI0519 21:59:55.178436 3460 log.go:172] (0xc0009c6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 21:59:55.180106 3460 log.go:172] (0xc0000f4370) Data frame received for 1\nI0519 21:59:55.180140 3460 log.go:172] (0xc00074b540) (1) Data frame handling\nI0519 21:59:55.180167 3460 log.go:172] (0xc00074b540) (1) Data frame sent\nI0519 21:59:55.180192 3460 log.go:172] (0xc0000f4370) (0xc00074b540) Stream removed, broadcasting: 1\nI0519 21:59:55.180288 3460 log.go:172] (0xc0000f4370) Go away received\nI0519 21:59:55.180655 3460 log.go:172] (0xc0000f4370) (0xc00074b540) Stream removed, broadcasting: 1\nI0519 21:59:55.180674 3460 log.go:172] (0xc0000f4370) (0xc00099c000) Stream removed, broadcasting: 3\nI0519 21:59:55.180685 3460 log.go:172] (0xc0000f4370) (0xc0009c6000) Stream removed, broadcasting: 5\n" May 19 21:59:55.185: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 21:59:55.185: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 21:59:55.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5355 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 21:59:55.406: INFO: stderr: "I0519 21:59:55.320363 3479 log.go:172] (0xc0007c0bb0) (0xc0003da320) Create stream\nI0519 21:59:55.320425 3479 log.go:172] (0xc0007c0bb0) (0xc0003da320) Stream added, broadcasting: 1\nI0519 21:59:55.323691 3479 log.go:172] (0xc0007c0bb0) Reply frame received for 1\nI0519 21:59:55.323733 3479 log.go:172] (0xc0007c0bb0) (0xc000850000) Create stream\nI0519 21:59:55.323744 3479 log.go:172] (0xc0007c0bb0) (0xc000850000) Stream added, broadcasting: 3\nI0519 21:59:55.324629 3479 log.go:172] (0xc0007c0bb0) Reply frame received for 3\nI0519 21:59:55.324658 3479 log.go:172] (0xc0007c0bb0) (0xc0008500a0) Create stream\nI0519 21:59:55.324669 3479 log.go:172] (0xc0007c0bb0) (0xc0008500a0) Stream added, broadcasting: 5\nI0519 21:59:55.325937 3479 log.go:172] (0xc0007c0bb0) Reply frame received for 5\nI0519 21:59:55.397825 3479 log.go:172] (0xc0007c0bb0) Data frame received for 5\nI0519 21:59:55.397852 3479 log.go:172] (0xc0008500a0) (5) Data frame handling\nI0519 21:59:55.397870 3479 log.go:172] (0xc0008500a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 21:59:55.398305 3479 log.go:172] (0xc0007c0bb0) Data frame received for 3\nI0519 21:59:55.398325 3479 log.go:172] (0xc000850000) (3) Data frame handling\nI0519 21:59:55.398339 3479 log.go:172] (0xc000850000) (3) Data frame sent\nI0519 21:59:55.398389 3479 log.go:172] (0xc0007c0bb0) Data frame received for 5\nI0519 21:59:55.398401 3479 log.go:172] (0xc0008500a0) (5) Data frame handling\nI0519 21:59:55.398631 3479 log.go:172] (0xc0007c0bb0) Data frame received for 3\nI0519 21:59:55.398648 3479 log.go:172] (0xc000850000) (3) Data frame handling\nI0519 21:59:55.400560 3479 log.go:172] (0xc0007c0bb0) Data frame received for 1\nI0519 21:59:55.400577 3479 log.go:172] (0xc0003da320) (1) Data frame handling\nI0519 21:59:55.400586 3479 log.go:172] (0xc0003da320) (1) Data frame sent\nI0519 21:59:55.400597 3479 log.go:172] (0xc0007c0bb0) (0xc0003da320) Stream removed, broadcasting: 1\nI0519 21:59:55.400607 3479 log.go:172] (0xc0007c0bb0) Go away received\nI0519 21:59:55.400957 3479 log.go:172] (0xc0007c0bb0) (0xc0003da320) Stream removed, broadcasting: 1\nI0519 21:59:55.400976 3479 
log.go:172] (0xc0007c0bb0) (0xc000850000) Stream removed, broadcasting: 3\nI0519 21:59:55.400985 3479 log.go:172] (0xc0007c0bb0) (0xc0008500a0) Stream removed, broadcasting: 5\n" May 19 21:59:55.406: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 21:59:55.406: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 21:59:55.406: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 19 22:00:15.422: INFO: Deleting all statefulset in ns statefulset-5355 May 19 22:00:15.426: INFO: Scaling statefulset ss to 0 May 19 22:00:15.433: INFO: Waiting for statefulset status.replicas updated to 0 May 19 22:00:15.436: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:00:15.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5355" for this suite. • [SLOW TEST:82.310 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":196,"skipped":3399,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:00:15.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 22:00:15.530: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:00:21.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2166" for this suite. 
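Two behaviours from the StatefulSet run above are worth restating: with the default OrderedReady policy, scale-up creates pods ordinal by ordinal and halts while any pod reports unready (the test forces unreadiness by moving index.html away from the probe path), and scale-down deletes from the highest ordinal back to zero. A hedged sketch of a StatefulSet wired the same way (names and image invented; the headless Service supplies the stable pod identities):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ss-demo
spec:
  clusterIP: None            # headless, for stable per-pod DNS identities
  ports:
  - port: 80
  selector:
    app: ss-demo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-demo
spec:
  serviceName: ss-demo
  podManagementPolicy: OrderedReady   # the default: strictly ordered scaling
  replicas: 3
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: web
        image: httpd:2.4
        ports:
        - containerPort: 80
        readinessProbe:               # an unready pod blocks further ordered scaling
          httpGet:
            path: /index.html
            port: 80
EOF
kubectl scale statefulset ss-demo --replicas=0   # terminates ss-2, then ss-1, then ss-0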
• [SLOW TEST:5.570 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":197,"skipped":3399,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:00:21.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-cfee92bf-6db8-489c-ab13-3374a6e0141c STEP: Creating a pod to test consume secrets May 19 22:00:21.227: INFO: Waiting up to 5m0s for pod "pod-secrets-d8065379-6fb5-4256-91d8-678141670d3d" in namespace "secrets-2629" to be "success or failure" May 19 22:00:21.280: INFO: Pod "pod-secrets-d8065379-6fb5-4256-91d8-678141670d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 53.226634ms May 19 22:00:23.284: INFO: Pod "pod-secrets-d8065379-6fb5-4256-91d8-678141670d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057195855s May 19 22:00:25.288: INFO: Pod "pod-secrets-d8065379-6fb5-4256-91d8-678141670d3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061343409s STEP: Saw pod success May 19 22:00:25.288: INFO: Pod "pod-secrets-d8065379-6fb5-4256-91d8-678141670d3d" satisfied condition "success or failure" May 19 22:00:25.291: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d8065379-6fb5-4256-91d8-678141670d3d container secret-volume-test: STEP: delete the pod May 19 22:00:25.385: INFO: Waiting for pod pod-secrets-d8065379-6fb5-4256-91d8-678141670d3d to disappear May 19 22:00:25.394: INFO: Pod pod-secrets-d8065379-6fb5-4256-91d8-678141670d3d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:00:25.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2629" for this suite. STEP: Destroying namespace "secret-namespace-5424" for this suite. 
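The Secrets test above also creates a second namespace (secret-namespace-5424) holding a secret of the same name, confirming that a secret volume resolves strictly within the pod's own namespace. A hedged sketch of that isolation (all names invented):

kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl create secret generic shared-name --from-literal=data=from-ns-a -n ns-a
kubectl create secret generic shared-name --from-literal=data=from-ns-b -n ns-b
kubectl apply -n ns-a -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "cat /etc/secret/data"]   # prints from-ns-a; ns-b's secret is never consulted
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: sec
    secret:
      secretName: shared-name    # resolved in the pod's own namespace only
EOF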
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3406,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:00:25.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 22:00:25.563: INFO: Creating deployment "webserver-deployment" May 19 22:00:25.567: INFO: Waiting for observed generation 1 May 19 22:00:27.577: INFO: Waiting for all required pods to come up May 19 22:00:27.580: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 19 22:00:37.587: INFO: Waiting for deployment "webserver-deployment" to complete May 19 22:00:37.591: INFO: Updating deployment "webserver-deployment" with a non-existent image May 19 22:00:37.597: INFO: Updating deployment webserver-deployment May 19 22:00:37.597: INFO: Waiting for observed generation 2 May 19 22:00:39.723: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 19 22:00:39.726: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 19 22:00:39.729: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 19 22:00:39.738: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 19 22:00:39.738: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 19 22:00:39.740: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 19 22:00:39.744: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 19 22:00:39.744: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 19 22:00:39.748: INFO: Updating deployment webserver-deployment May 19 22:00:39.748: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 19 22:00:39.883: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 19 22:00:40.011: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 19 22:00:40.265: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5499 /apis/apps/v1/namespaces/deployment-5499/deployments/webserver-deployment 89406889-5440-4568-b22a-257590933c0e 17542815 3 2020-05-19 22:00:25 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039b5b38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-19 22:00:38 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-19 22:00:39 +0000 UTC,LastTransitionTime:2020-05-19 22:00:39 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 19 22:00:40.379: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5499 /apis/apps/v1/namespaces/deployment-5499/replicasets/webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 17542869 3 2020-05-19 22:00:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 89406889-5440-4568-b22a-257590933c0e 0xc0055daf27 0xc0055daf28}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055daf98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 22:00:40.379: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 19 22:00:40.379: INFO: 
&ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5499 /apis/apps/v1/namespaces/deployment-5499/replicasets/webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 17542866 3 2020-05-19 22:00:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 89406889-5440-4568-b22a-257590933c0e 0xc0055dae57 0xc0055dae58}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055daec8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 19 22:00:40.522: INFO: Pod "webserver-deployment-595b5b9587-6tknz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6tknz webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-6tknz ae9c51e4-965b-44b0-95dd-e4529130a21c 17542848 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dac1a7 0xc001dac1a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.522: INFO: Pod "webserver-deployment-595b5b9587-7phz6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7phz6 webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-7phz6 e9be8d91-20b8-4fc2-91a0-c913b1f18337 17542718 0 2020-05-19 22:00:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dac327 0xc001dac328}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.75,StartTime:2020-05-19 22:00:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 22:00:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a203dbc1fd83bc69b94234b0be9caa6c7f6e734134457d817a5d8ce35e00af37,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.522: INFO: Pod "webserver-deployment-595b5b9587-8fnn8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8fnn8 webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-8fnn8 6cbe367e-0b82-4695-be41-d9d8a456c0a1 17542684 0 2020-05-19 22:00:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dac4b7 0xc001dac4b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.74,StartTime:2020-05-19 22:00:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 22:00:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2793e27b8140b6bb714a9134081eb74501836f58ec6bf4397e0fa8a88c1047da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.522: INFO: Pod "webserver-deployment-595b5b9587-9hl2d" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9hl2d webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-9hl2d 4fc647be-bd65-41af-b357-770e5b182bca 17542678 0 2020-05-19 22:00:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dac647 0xc001dac648}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.98,StartTime:2020-05-19 22:00:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 22:00:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://87e783f5fac21b44be641d7ae07296cc6c4906d8735e60c11afd81d60d90e4b4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.523: INFO: Pod "webserver-deployment-595b5b9587-c7dqx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c7dqx webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-c7dqx bbe48437-435d-480a-9503-d068d2eb5ab2 17542855 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dac8a7 0xc001dac8a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.523: INFO: Pod "webserver-deployment-595b5b9587-f2jzj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-f2jzj webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-f2jzj 46f65f42-5882-41c6-959e-3a9817db4339 17542851 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001daca17 0xc001daca18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExec
ute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.523: INFO: Pod "webserver-deployment-595b5b9587-k7x22" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k7x22 webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-k7x22 b0d651b9-5240-4f7a-a3a8-dd9dc776b9c7 17542852 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dacba7 0xc001dacba8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations
:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.523: INFO: Pod "webserver-deployment-595b5b9587-kjfgv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kjfgv webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-kjfgv ef92f30f-1553-4673-933f-e1099667991f 17542710 0 2020-05-19 22:00:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dacd17 0xc001dacd18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:
default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.99,StartTime:2020-05-19 22:00:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 22:00:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e5659a88011ee71a06a8a7791763d175103a7d3f550d9c0e37b2c1a3ae6a9272,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.99,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.523: INFO: Pod "webserver-deployment-595b5b9587-lv8xp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lv8xp webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-lv8xp 3a96498e-f9bf-4b4f-a54d-803b73e1540b 17542837 0 2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dacef7 0xc001dacef8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.523: INFO: Pod "webserver-deployment-595b5b9587-m8q4d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m8q4d webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-m8q4d b5aa4ae1-bcfe-48a5-bce9-aac0083b142f 17542839 0 2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dad077 0xc001dad078}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.523: INFO: Pod "webserver-deployment-595b5b9587-mpznc" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mpznc webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-mpznc 89649e8a-ce74-42ba-b257-1be0ace5054c 17542841 0 
2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dad427 0xc001dad428}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.524: INFO: Pod "webserver-deployment-595b5b9587-p8xbw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-p8xbw webserver-deployment-595b5b9587- deployment-5499 
/api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-p8xbw 0aa870c1-88ff-48ce-90e2-65f8e5c17424 17542731 0 2020-05-19 22:00:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dad6c7 0xc001dad6c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.77,StartTime:2020-05-19 22:00:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 22:00:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6d0c40fc68b9ef6d070683233d4f790180b7e3e0ca46441f013510b23059af89,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.524: INFO: Pod "webserver-deployment-595b5b9587-pffsm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pffsm webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-pffsm f98b4063-a8d2-4f7a-a472-8accd8a914a8 17542834 0 2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dadb47 0xc001dadb48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-sc
heduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-19 22:00:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.524: INFO: Pod "webserver-deployment-595b5b9587-qkf5x" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qkf5x webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-qkf5x 7dee501f-326f-4e3d-b9a4-c85d9c38af3a 17542716 0 2020-05-19 22:00:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc001dade97 0xc001dade98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.100,StartTime:2020-05-19 22:00:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 22:00:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b410d7d286977004f4bf4d18c125250c88fffe2c59a8a5a532e6e974dd692fba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.524: INFO: Pod "webserver-deployment-595b5b9587-qvj2r" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qvj2r webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-qvj2r e758cbc5-b976-4b49-937e-d6a6664f21e1 17542875 0 2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc0025f00f7 0xc0025f00f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-19 22:00:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.524: INFO: Pod "webserver-deployment-595b5b9587-svzl8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-svzl8 webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-svzl8 5d290b73-da5d-42a6-b718-897968f752b0 17542849 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc0025f02c7 0xc0025f02c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.524: INFO: Pod "webserver-deployment-595b5b9587-vsvwb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vsvwb webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-vsvwb e77a94dd-337d-47be-a08e-4e3a5b74e888 17542840 0 2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc0025f03f7 0xc0025f03f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.525: INFO: Pod "webserver-deployment-595b5b9587-wr6fh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wr6fh webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-wr6fh 69db616b-ef7b-4602-8707-d3bdaabdc72d 17542874 0 
2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc0025f0547 0xc0025f0548}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-19 22:00:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.525: INFO: Pod "webserver-deployment-595b5b9587-ws2bp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ws2bp webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-ws2bp b3b0b2b8-1ff7-40f3-8823-0966ca87e76b 17542674 0 2020-05-19 22:00:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc0025f0737 0xc0025f0738}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kub
ernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.97,StartTime:2020-05-19 22:00:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 22:00:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b8efe40ee5de8715b9077de4da607aed30c3d536acdd22216e54480cd3a2dc20,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.525: INFO: Pod "webserver-deployment-595b5b9587-zsgz2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zsgz2 webserver-deployment-595b5b9587- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-595b5b9587-zsgz2 8483981d-78ee-460c-b711-3b415d0cd158 17542723 0 2020-05-19 22:00:25 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2ca81871-96ad-4583-a973-0bd9e47c3430 0xc0025f08f7 0xc0025f08f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.78,StartTime:2020-05-19 22:00:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 22:00:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c0e54e930dc651a584d2cf8d8c3be33d06dc77d807c95ab40cb77afebe95de3e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.525: INFO: Pod "webserver-deployment-c7997dcc8-4459x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4459x webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-4459x f75df470-c8bf-4e18-b567-8a03f9a189b2 17542820 0 2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f0a87 0xc0025f0a88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Toleration
Seconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.525: INFO: Pod "webserver-deployment-c7997dcc8-5896v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5896v webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-5896v 40af5b2e-ea0a-463b-8a0d-7cfadf1e808b 17542789 0 2020-05-19 22:00:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f0c17 0xc0025f0c18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.i
o/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-19 22:00:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.525: INFO: Pod "webserver-deployment-c7997dcc8-b7k2b" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b7k2b webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-b7k2b ca6b82ed-7383-4b1c-8cd4-6b2c15ab436c 17542847 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f0dc7 0xc0025f0dc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.525: INFO: Pod "webserver-deployment-c7997dcc8-fpjz8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fpjz8 webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-fpjz8 24050148-8b1e-49d8-9e4a-cf8e3a9122e9 17542842 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f0f27 0xc0025f0f28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.526: INFO: Pod "webserver-deployment-c7997dcc8-g6nr2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g6nr2 webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-g6nr2 aefdf042-097e-4eba-82a7-c5dd9b3a78d5 17542793 0 2020-05-19 22:00:38 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f10a7 0xc0025f10a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-19 22:00:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-19 22:00:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.526: INFO: Pod "webserver-deployment-c7997dcc8-hk452" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hk452 webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-hk452 49039822-cdd9-4943-b471-4ee50e73c4f8 17542828 0 2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f1237 0xc0025f1238}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority
:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.526: INFO: Pod "webserver-deployment-c7997dcc8-kvdzp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kvdzp webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-kvdzp fb6a63f7-65a7-4e67-9fd2-d0546f11b43d 17542863 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f1387 0xc0025f1388}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.526: INFO: Pod "webserver-deployment-c7997dcc8-mwsj9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mwsj9 webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-mwsj9 5558b8ab-96ce-4b67-b959-e089cf5c85cd 17542791 0 2020-05-19 22:00:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f14c7 0xc0025f14c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unr
eachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-19 22:00:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.526: INFO: Pod "webserver-deployment-c7997dcc8-ph9dw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ph9dw webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-ph9dw 7f7d24d4-e235-4223-9404-36d028d2aec5 17542845 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f16f7 0xc0025f16f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.526: INFO: Pod "webserver-deployment-c7997dcc8-pznxx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pznxx webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-pznxx 6b156fca-986f-4e30-ab5d-011528d36eca 17542764 0 2020-05-19 22:00:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f1847 0xc0025f1848}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-19 22:00:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.526: INFO: Pod "webserver-deployment-c7997dcc8-qkp8h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qkp8h webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-qkp8h 2bff40c0-ee2a-49ce-8245-cd30b287a51a 17542766 0 2020-05-19 22:00:37 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f1a17 0xc0025f1a18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readin
essGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-19 22:00:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.526: INFO: Pod "webserver-deployment-c7997dcc8-s5qhf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s5qhf webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-s5qhf ba66e97e-9db2-4358-9195-00862fddfebc 17542846 0 2020-05-19 22:00:40 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f1bc7 0xc0025f1bc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 22:00:40.526: INFO: Pod "webserver-deployment-c7997dcc8-zplpw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zplpw webserver-deployment-c7997dcc8- deployment-5499 /api/v1/namespaces/deployment-5499/pods/webserver-deployment-c7997dcc8-zplpw a534a5c2-8f94-4ae6-bc41-ea32566eab4b 17542833 0 2020-05-19 22:00:39 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
26e5eaa8-9826-4d6b-b842-200c538b035f 0xc0025f1d47 0xc0025f1d48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jrvj6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jrvj6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jrvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:00:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:00:40.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5499" for this suite. 
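The proportional-scaling behavior being verified above can be reproduced outside the e2e framework. What follows is a minimal client-go sketch, not the suite's own code: the names, replica counts, and surge/unavailable values are illustrative assumptions, and the method signatures assume client-go v0.17.x (the pre-context API matching a v1.17 cluster).

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	labels := map[string]string{"name": "httpd"}
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)
	// A RollingUpdate Deployment with room to surge: while an image update is
	// still unresolved, the old and new ReplicaSets coexist.
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	created, err := cs.AppsV1().Deployments("default").Create(d)
	if err != nil {
		panic(err)
	}
	// Scaling mid-rollout is what "proportional scaling" means: the deployment
	// controller splits the replica delta across the coexisting ReplicaSets in
	// proportion to their current sizes.
	created.Spec.Replicas = int32Ptr(30)
	if _, err := cs.AppsV1().Deployments("default").Update(created); err != nil {
		panic(err)
	}
	fmt.Println("scaled deployment from 10 to 30 replicas mid-rollout")
}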
• [SLOW TEST:15.329 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":199,"skipped":3424,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:00:40.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:00:59.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4145" for this suite. • [SLOW TEST:19.060 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3441,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:00:59.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 19 22:01:00.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9634 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure 
--attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 19 22:01:08.103: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0519 22:01:07.610098 3496 log.go:172] (0xc000a3e9a0) (0xc000a66280) Create stream\nI0519 22:01:07.610144 3496 log.go:172] (0xc000a3e9a0) (0xc000a66280) Stream added, broadcasting: 1\nI0519 22:01:07.612169 3496 log.go:172] (0xc000a3e9a0) Reply frame received for 1\nI0519 22:01:07.612197 3496 log.go:172] (0xc000a3e9a0) (0xc000a66320) Create stream\nI0519 22:01:07.612205 3496 log.go:172] (0xc000a3e9a0) (0xc000a66320) Stream added, broadcasting: 3\nI0519 22:01:07.612812 3496 log.go:172] (0xc000a3e9a0) Reply frame received for 3\nI0519 22:01:07.612843 3496 log.go:172] (0xc000a3e9a0) (0xc000782000) Create stream\nI0519 22:01:07.612853 3496 log.go:172] (0xc000a3e9a0) (0xc000782000) Stream added, broadcasting: 5\nI0519 22:01:07.613843 3496 log.go:172] (0xc000a3e9a0) Reply frame received for 5\nI0519 22:01:07.613884 3496 log.go:172] (0xc000a3e9a0) (0xc0007b6000) Create stream\nI0519 22:01:07.613896 3496 log.go:172] (0xc000a3e9a0) (0xc0007b6000) Stream added, broadcasting: 7\nI0519 22:01:07.615225 3496 log.go:172] (0xc000a3e9a0) Reply frame received for 7\nI0519 22:01:07.615340 3496 log.go:172] (0xc000a66320) (3) Writing data frame\nI0519 22:01:07.615475 3496 log.go:172] (0xc000a66320) (3) Writing data frame\nI0519 22:01:07.616205 3496 log.go:172] (0xc000a3e9a0) Data frame received for 5\nI0519 22:01:07.616232 3496 log.go:172] (0xc000782000) (5) Data frame handling\nI0519 22:01:07.616247 3496 log.go:172] (0xc000782000) (5) Data frame sent\nI0519 22:01:07.616725 3496 log.go:172] (0xc000a3e9a0) Data frame received for 5\nI0519 22:01:07.616764 3496 log.go:172] (0xc000782000) (5) Data frame handling\nI0519 22:01:07.616786 3496 log.go:172] (0xc000782000) (5) Data frame sent\nI0519 22:01:07.650580 3496 log.go:172] (0xc000a3e9a0) Data frame received for 5\nI0519 22:01:07.650618 3496 log.go:172] (0xc000782000) (5) Data frame handling\nI0519 22:01:07.650685 3496 log.go:172] (0xc000a3e9a0) Data frame received for 7\nI0519 22:01:07.650701 3496 log.go:172] (0xc0007b6000) (7) Data frame handling\nI0519 22:01:07.651631 3496 log.go:172] (0xc000a3e9a0) (0xc000a66320) Stream removed, broadcasting: 3\nI0519 22:01:07.651662 3496 log.go:172] (0xc000a3e9a0) Data frame received for 1\nI0519 22:01:07.651674 3496 log.go:172] (0xc000a66280) (1) Data frame handling\nI0519 22:01:07.651695 3496 log.go:172] (0xc000a66280) (1) Data frame sent\nI0519 22:01:07.651710 3496 log.go:172] (0xc000a3e9a0) (0xc000a66280) Stream removed, broadcasting: 1\nI0519 22:01:07.652063 3496 log.go:172] (0xc000a3e9a0) Go away received\nI0519 22:01:07.652106 3496 log.go:172] (0xc000a3e9a0) (0xc000a66280) Stream removed, broadcasting: 1\nI0519 22:01:07.652142 3496 log.go:172] (0xc000a3e9a0) (0xc000a66320) Stream removed, broadcasting: 3\nI0519 22:01:07.652161 3496 log.go:172] (0xc000a3e9a0) (0xc000782000) Stream removed, broadcasting: 5\nI0519 22:01:07.652177 3496 log.go:172] (0xc000a3e9a0) (0xc0007b6000) Stream removed, broadcasting: 7\n" May 19 22:01:08.103: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 
22:01:10.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9634" for this suite. • [SLOW TEST:10.445 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":201,"skipped":3444,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:10.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 19 22:01:10.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 19 22:01:10.916: INFO: stderr: "" May 19 22:01:10.916: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:01:10.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1926" for this suite. 
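The cluster-info output above is assembled from two sources: the API server URL in the loaded kubeconfig and the kube-system services carrying the cluster-service label. A minimal client-go sketch of the same lookup, assuming v0.17.x signatures and simplifying the proxy-URL construction (the real command appends the service port name, e.g. kube-dns:dns):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	// The "Kubernetes master" line is just the API server URL from the config.
	fmt.Println("Kubernetes master is running at", cfg.Host)
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The remaining lines come from kube-system services labeled as cluster
	// services (kube-dns in this run), printed as apiserver proxy URLs.
	svcs, err := cs.CoreV1().Services("kube-system").List(metav1.ListOptions{
		LabelSelector: "kubernetes.io/cluster-service=true",
	})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s is running at %s/api/v1/namespaces/kube-system/services/%s/proxy\n",
			s.Name, cfg.Host, s.Name)
	}
}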
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":202,"skipped":3458,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:10.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-60b2bac4-2fc7-4d7a-b173-805a6f1b5ff7 STEP: Creating a pod to test consume secrets May 19 22:01:11.475: INFO: Waiting up to 5m0s for pod "pod-secrets-4449e1e6-cdde-40d2-8653-220a8c3d847e" in namespace "secrets-7313" to be "success or failure" May 19 22:01:11.700: INFO: Pod "pod-secrets-4449e1e6-cdde-40d2-8653-220a8c3d847e": Phase="Pending", Reason="", readiness=false. Elapsed: 225.045056ms May 19 22:01:13.704: INFO: Pod "pod-secrets-4449e1e6-cdde-40d2-8653-220a8c3d847e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229644952s May 19 22:01:15.712: INFO: Pod "pod-secrets-4449e1e6-cdde-40d2-8653-220a8c3d847e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236838895s May 19 22:01:17.717: INFO: Pod "pod-secrets-4449e1e6-cdde-40d2-8653-220a8c3d847e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24220404s STEP: Saw pod success May 19 22:01:17.717: INFO: Pod "pod-secrets-4449e1e6-cdde-40d2-8653-220a8c3d847e" satisfied condition "success or failure" May 19 22:01:17.720: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-4449e1e6-cdde-40d2-8653-220a8c3d847e container secret-volume-test: STEP: delete the pod May 19 22:01:17.790: INFO: Waiting for pod pod-secrets-4449e1e6-cdde-40d2-8653-220a8c3d847e to disappear May 19 22:01:17.800: INFO: Pod pod-secrets-4449e1e6-cdde-40d2-8653-220a8c3d847e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:01:17.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7313" for this suite. 
• [SLOW TEST:6.885 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3461,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:17.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 19 22:01:17.867: INFO: Waiting up to 5m0s for pod "downward-api-67bfd262-1d2c-4b01-87db-e9a5c9e71d8f" in namespace "downward-api-2064" to be "success or failure" May 19 22:01:17.872: INFO: Pod "downward-api-67bfd262-1d2c-4b01-87db-e9a5c9e71d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.043082ms May 19 22:01:19.876: INFO: Pod "downward-api-67bfd262-1d2c-4b01-87db-e9a5c9e71d8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009394564s May 19 22:01:21.880: INFO: Pod "downward-api-67bfd262-1d2c-4b01-87db-e9a5c9e71d8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013457665s STEP: Saw pod success May 19 22:01:21.880: INFO: Pod "downward-api-67bfd262-1d2c-4b01-87db-e9a5c9e71d8f" satisfied condition "success or failure" May 19 22:01:21.884: INFO: Trying to get logs from node jerma-worker2 pod downward-api-67bfd262-1d2c-4b01-87db-e9a5c9e71d8f container dapi-container: STEP: delete the pod May 19 22:01:21.919: INFO: Waiting for pod downward-api-67bfd262-1d2c-4b01-87db-e9a5c9e71d8f to disappear May 19 22:01:21.948: INFO: Pod downward-api-67bfd262-1d2c-4b01-87db-e9a5c9e71d8f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:01:21.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2064" for this suite. 
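What this test builds is a pod whose environment variables are resolved from its own resource requests and limits through downward-API resourceFieldRef sources. A hedged sketch with illustrative resource values and v0.17.x signatures:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				// resourceFieldRef is the downward-API source the test checks:
				// the kubelet resolves these to the container's own values.
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
					}},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}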
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3470,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:21.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-c777ee0b-3cb9-453f-9d28-4dd03b91d0c1 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-c777ee0b-3cb9-453f-9d28-4dd03b91d0c1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:01:28.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-469" for this suite. • [SLOW TEST:6.384 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3474,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:28.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 19 22:01:28.424: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:01:35.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9887" for this suite. 
• [SLOW TEST:7.535 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":206,"skipped":3488,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:35.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-32477796-563a-423f-9f0c-d656f884f874 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:01:42.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7014" for this suite. 
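This test stores both text and binary payloads in a single ConfigMap: the binaryData field carries arbitrary bytes, projected as files alongside data keys when the ConfigMap is mounted. A sketch with assumed names and payloads, v0.17.x signatures:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		// data holds UTF-8 strings; binaryData holds raw bytes. Both become
		// files under the mount path.
		Data:       map[string]string{"data": "value"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(cm); err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-binary-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-binary-demo"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "docker.io/library/busybox:1.29",
				// Print the text key, then hex-dump the binary key.
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/data; od -An -tx1 /etc/configmap-volume/dump.bin"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}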
• [SLOW TEST:6.887 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3501,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:42.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 19 22:01:47.364: INFO: Successfully updated pod "pod-update-4603a0be-ac22-45d7-8178-ab2372cf0542" STEP: verifying the updated pod is in kubernetes May 19 22:01:47.412: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:01:47.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2975" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3509,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:47.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 19 22:01:47.472: INFO: Waiting up to 5m0s for pod "pod-3b2a696b-16d4-4db4-bf30-090df34071e7" in namespace "emptydir-9934" to be "success or failure" May 19 22:01:47.475: INFO: Pod "pod-3b2a696b-16d4-4db4-bf30-090df34071e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.303536ms May 19 22:01:49.479: INFO: Pod "pod-3b2a696b-16d4-4db4-bf30-090df34071e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007450657s May 19 22:01:51.483: INFO: Pod "pod-3b2a696b-16d4-4db4-bf30-090df34071e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011431011s STEP: Saw pod success May 19 22:01:51.483: INFO: Pod "pod-3b2a696b-16d4-4db4-bf30-090df34071e7" satisfied condition "success or failure" May 19 22:01:51.486: INFO: Trying to get logs from node jerma-worker2 pod pod-3b2a696b-16d4-4db4-bf30-090df34071e7 container test-container: STEP: delete the pod May 19 22:01:51.503: INFO: Waiting for pod pod-3b2a696b-16d4-4db4-bf30-090df34071e7 to disappear May 19 22:01:51.519: INFO: Pod pod-3b2a696b-16d4-4db4-bf30-090df34071e7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:01:51.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9934" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3511,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:51.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-23cd4eb3-d5cf-4ec2-817e-888223eb4376 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-23cd4eb3-d5cf-4ec2-817e-888223eb4376 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:01:57.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8901" for this suite. 
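The update-propagation behavior verified above relies on the kubelet periodically re-syncing ConfigMap-backed volumes, so a mounted file changes in place without a pod restart (env vars sourced from ConfigMaps, by contrast, are fixed at container start). A sketch of the create-then-update sequence with assumed names, v0.17.x signatures:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cms := cs.CoreV1().ConfigMaps("default")
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-upd-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	created, err := cms.Create(cm)
	if err != nil {
		panic(err)
	}
	// A pod mounting this ConfigMap as a volume (not shown) would first see
	// a file data-1 containing "value-1".
	created.Data["data-1"] = "value-2"
	if _, err := cms.Update(created); err != nil {
		panic(err)
	}
	// On the kubelet's next volume sync the mounted file is rewritten to
	// "value-2"; the e2e test polls the pod's output until it observes this.
}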
• [SLOW TEST:6.154 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:01:57.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 22:01:57.766: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 19 22:01:57.801: INFO: Pod name sample-pod: Found 0 pods out of 1 May 19 22:02:02.821: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 22:02:02.821: INFO: Creating deployment "test-rolling-update-deployment" May 19 22:02:02.824: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 19 22:02:02.838: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 19 22:02:05.016: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 19 22:02:05.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522523, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522523, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522523, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522522, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 22:02:07.028: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 19 22:02:07.037: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4852 /apis/apps/v1/namespaces/deployment-4852/deployments/test-rolling-update-deployment 79918ace-7add-44ef-8fad-fd8b51d6895c 17543794 1 2020-05-19 22:02:02 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e36ba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-19 22:02:03 +0000 UTC,LastTransitionTime:2020-05-19 22:02:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-19 22:02:06 +0000 UTC,LastTransitionTime:2020-05-19 22:02:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 19 22:02:07.039: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-4852 /apis/apps/v1/namespaces/deployment-4852/replicasets/test-rolling-update-deployment-67cf4f6444 4f286968-580c-460d-a81a-9b9b9a071434 17543783 1 2020-05-19 22:02:02 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 79918ace-7add-44ef-8fad-fd8b51d6895c 0xc002e37047 0xc002e37048}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e370b8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 19 22:02:07.039: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 19 22:02:07.039: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4852 /apis/apps/v1/namespaces/deployment-4852/replicasets/test-rolling-update-controller 512de637-042c-4119-859d-fa6f48d8280d 17543793 2 2020-05-19 22:01:57 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 79918ace-7add-44ef-8fad-fd8b51d6895c 0xc002e36f77 0xc002e36f78}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e36fd8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 22:02:07.042: INFO: Pod "test-rolling-update-deployment-67cf4f6444-js8tg" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-js8tg test-rolling-update-deployment-67cf4f6444- deployment-4852 /api/v1/namespaces/deployment-4852/pods/test-rolling-update-deployment-67cf4f6444-js8tg 63f90477-afd5-44a3-a321-ed549b84897c 17543782 0 2020-05-19 22:02:02 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 4f286968-580c-460d-a81a-9b9b9a071434 0xc005b39fe7 0xc005b39fe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zkf6s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zkf6s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zkf6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:02:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:02:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:02:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 22:02:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.120,StartTime:2020-05-19 22:02:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 22:02:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1e898b1a027006d57d3ab725f39e062078dec1aa462762b5be1c9e5c80596515,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:02:07.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4852" for this suite. • [SLOW TEST:9.366 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":211,"skipped":3563,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:02:07.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 19 22:02:07.107: INFO: Waiting up to 5m0s for pod "pod-b9570bf1-41fc-4da8-8492-b0a0931c1017" in namespace "emptydir-6639" to be "success or failure" May 19 22:02:07.155: INFO: Pod "pod-b9570bf1-41fc-4da8-8492-b0a0931c1017": Phase="Pending", Reason="", readiness=false. Elapsed: 48.009622ms May 19 22:02:09.185: INFO: Pod "pod-b9570bf1-41fc-4da8-8492-b0a0931c1017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077957766s May 19 22:02:11.190: INFO: Pod "pod-b9570bf1-41fc-4da8-8492-b0a0931c1017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.08227841s STEP: Saw pod success May 19 22:02:11.190: INFO: Pod "pod-b9570bf1-41fc-4da8-8492-b0a0931c1017" satisfied condition "success or failure" May 19 22:02:11.193: INFO: Trying to get logs from node jerma-worker2 pod pod-b9570bf1-41fc-4da8-8492-b0a0931c1017 container test-container: STEP: delete the pod May 19 22:02:11.474: INFO: Waiting for pod pod-b9570bf1-41fc-4da8-8492-b0a0931c1017 to disappear May 19 22:02:11.476: INFO: Pod pod-b9570bf1-41fc-4da8-8492-b0a0931c1017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:02:11.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6639" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:02:11.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 22:02:11.553: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b59e2dd-8a28-4433-817b-671a0b868693" in namespace "projected-7016" to be "success or failure" May 19 22:02:11.556: INFO: Pod "downwardapi-volume-0b59e2dd-8a28-4433-817b-671a0b868693": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366264ms May 19 22:02:13.773: INFO: Pod "downwardapi-volume-0b59e2dd-8a28-4433-817b-671a0b868693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220108023s May 19 22:02:15.776: INFO: Pod "downwardapi-volume-0b59e2dd-8a28-4433-817b-671a0b868693": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.22383564s STEP: Saw pod success May 19 22:02:15.776: INFO: Pod "downwardapi-volume-0b59e2dd-8a28-4433-817b-671a0b868693" satisfied condition "success or failure" May 19 22:02:15.779: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0b59e2dd-8a28-4433-817b-671a0b868693 container client-container: STEP: delete the pod May 19 22:02:15.852: INFO: Waiting for pod downwardapi-volume-0b59e2dd-8a28-4433-817b-671a0b868693 to disappear May 19 22:02:15.862: INFO: Pod downwardapi-volume-0b59e2dd-8a28-4433-817b-671a0b868693 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:02:15.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7016" for this suite. 
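The downward API spec above drives its behavior through a single knob: the per-item mode on a projected volume. As a rough sketch in Go against the k8s.io/api types, assuming placeholder names, a busybox image, and an illustrative 0400 mode (none of these taken from the run above):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // the per-item mode the test then asserts on the mounted file
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // placeholder name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox:1.29",
                Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "podname",
                                    FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                                    Mode:     &mode, // "set mode on item file" is exactly this field
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}

The same create-wait-read-logs-delete cycle visible in the log (the "success or failure" condition) applies to this pod as to the emptyDir mode test before it.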
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:02:15.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:02:27.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5710" for this suite. • [SLOW TEST:11.177 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":214,"skipped":3652,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:02:27.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 22:02:28.122: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 22:02:30.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522548, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522548, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522548, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522548, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 22:02:33.182: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:02:33.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5583" for this suite. STEP: Destroying namespace "webhook-5583-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.254 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":215,"skipped":3671,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:02:33.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-9750519a-bbde-47a4-b71b-27a9bf35607c STEP: Creating a pod to test consume configMaps May 19 22:02:33.421: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0c0fcd8-f2e3-4a03-906d-9ff7b178ab12" in namespace "configmap-4699" to be "success or failure" May 19 22:02:33.436: INFO: Pod "pod-configmaps-e0c0fcd8-f2e3-4a03-906d-9ff7b178ab12": Phase="Pending", Reason="", readiness=false. Elapsed: 14.745544ms May 19 22:02:35.439: INFO: Pod "pod-configmaps-e0c0fcd8-f2e3-4a03-906d-9ff7b178ab12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01815105s May 19 22:02:37.443: INFO: Pod "pod-configmaps-e0c0fcd8-f2e3-4a03-906d-9ff7b178ab12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022146347s STEP: Saw pod success May 19 22:02:37.443: INFO: Pod "pod-configmaps-e0c0fcd8-f2e3-4a03-906d-9ff7b178ab12" satisfied condition "success or failure" May 19 22:02:37.446: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e0c0fcd8-f2e3-4a03-906d-9ff7b178ab12 container configmap-volume-test: STEP: delete the pod May 19 22:02:37.505: INFO: Waiting for pod pod-configmaps-e0c0fcd8-f2e3-4a03-906d-9ff7b178ab12 to disappear May 19 22:02:37.581: INFO: Pod pod-configmaps-e0c0fcd8-f2e3-4a03-906d-9ff7b178ab12 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:02:37.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4699" for this suite. 
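What "with mappings" means in the ConfigMap spec above is the items list on the ConfigMap volume source, which remaps a key to an arbitrary relative path inside the mount. A minimal sketch with placeholder names and data:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"}, // placeholder
        Data:       map[string]string{"data-1": "value-1"},
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"}, // placeholder
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "configmap-volume-test",
                Image:        "busybox:1.29",
                Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                        // The "mapping": key data-1 surfaces as path/to/data-2 in the mount.
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                },
            }},
        },
    }
    for _, obj := range []interface{}{cm, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}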
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3678,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:02:37.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-297d3606-80a2-4a08-9ec1-937beeb7dd11 STEP: Creating a pod to test consume configMaps May 19 22:02:37.680: INFO: Waiting up to 5m0s for pod "pod-configmaps-432a761c-961d-4a12-bc21-460553801b60" in namespace "configmap-3024" to be "success or failure" May 19 22:02:37.707: INFO: Pod "pod-configmaps-432a761c-961d-4a12-bc21-460553801b60": Phase="Pending", Reason="", readiness=false. Elapsed: 26.150477ms May 19 22:02:39.737: INFO: Pod "pod-configmaps-432a761c-961d-4a12-bc21-460553801b60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056033632s May 19 22:02:41.740: INFO: Pod "pod-configmaps-432a761c-961d-4a12-bc21-460553801b60": Phase="Running", Reason="", readiness=true. Elapsed: 4.059738504s May 19 22:02:43.745: INFO: Pod "pod-configmaps-432a761c-961d-4a12-bc21-460553801b60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06472879s STEP: Saw pod success May 19 22:02:43.745: INFO: Pod "pod-configmaps-432a761c-961d-4a12-bc21-460553801b60" satisfied condition "success or failure" May 19 22:02:43.748: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-432a761c-961d-4a12-bc21-460553801b60 container configmap-volume-test: STEP: delete the pod May 19 22:02:43.767: INFO: Waiting for pod pod-configmaps-432a761c-961d-4a12-bc21-460553801b60 to disappear May 19 22:02:43.771: INFO: Pod pod-configmaps-432a761c-961d-4a12-bc21-460553801b60 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:02:43.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3024" for this suite. 
• [SLOW TEST:6.189 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3678,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:02:43.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 19 22:02:48.461: INFO: Successfully updated pod "pod-update-activedeadlineseconds-74148bb6-4d57-4259-af86-f02f0db5fefe" May 19 22:02:48.461: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-74148bb6-4d57-4259-af86-f02f0db5fefe" in namespace "pods-2547" to be "terminated due to deadline exceeded" May 19 22:02:48.466: INFO: Pod "pod-update-activedeadlineseconds-74148bb6-4d57-4259-af86-f02f0db5fefe": Phase="Running", Reason="", readiness=true. Elapsed: 4.924955ms May 19 22:02:50.471: INFO: Pod "pod-update-activedeadlineseconds-74148bb6-4d57-4259-af86-f02f0db5fefe": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009412256s May 19 22:02:50.471: INFO: Pod "pod-update-activedeadlineseconds-74148bb6-4d57-4259-af86-f02f0db5fefe" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:02:50.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2547" for this suite. 
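activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a running pod, which is what the Pods spec above relies on. A hedged sketch assuming a recent client-go (the context-taking signatures of v0.18+, newer than the v1.17 suite in this log); the kubeconfig path, namespace, pod name, and 5-second deadline are placeholders:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from a kubeconfig; the path is a placeholder.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    pods := cs.CoreV1().Pods("default") // namespace is a placeholder

    pod, err := pods.Get(ctx, "pod-update-activedeadlineseconds-demo", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    deadline := int64(5) // seconds of allowed runtime, counted from pod start
    pod.Spec.ActiveDeadlineSeconds = &deadline
    if _, err := pods.Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    // Once the deadline passes, the kubelet kills the pod and it reaches
    // Phase=Failed with Reason=DeadlineExceeded, as in the log above.
    fmt.Println("activeDeadlineSeconds updated")
}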
• [SLOW TEST:6.702 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3687,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:02:50.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 19 22:02:50.553: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 19 22:02:50.563: INFO: Waiting for terminating namespaces to be deleted... May 19 22:02:50.566: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 19 22:02:50.571: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 22:02:50.571: INFO: Container kindnet-cni ready: true, restart count 0 May 19 22:02:50.571: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 22:02:50.571: INFO: Container kube-proxy ready: true, restart count 0 May 19 22:02:50.571: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 19 22:02:50.577: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 19 22:02:50.577: INFO: Container kube-hunter ready: false, restart count 0 May 19 22:02:50.577: INFO: pod-update-activedeadlineseconds-74148bb6-4d57-4259-af86-f02f0db5fefe from pods-2547 started at 2020-05-19 22:02:43 +0000 UTC (1 container status recorded) May 19 22:02:50.577: INFO: Container nginx ready: false, restart count 0 May 19 22:02:50.577: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 22:02:50.577: INFO: Container kindnet-cni ready: true, restart count 0 May 19 22:02:50.577: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 19 22:02:50.577: INFO: Container kube-bench ready: false, restart count 0 May 19 22:02:50.577: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 19 22:02:50.577: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f4630263-7bdc-4ad6-89b9-16edf5affe09 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-f4630263-7bdc-4ad6-89b9-16edf5affe09 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-f4630263-7bdc-4ad6-89b9-16edf5affe09 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:02:58.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-65" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.256 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":219,"skipped":3695,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:02:58.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-633e0074-7482-49a5-920c-468b233f01e5 STEP: Creating a pod to test consume secrets May 19 22:02:58.842: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2cd1ca97-2628-4d3c-8566-da6cc1d49c31" in namespace "projected-5046" to be "success or failure" May 19 22:02:58.860: INFO: Pod "pod-projected-secrets-2cd1ca97-2628-4d3c-8566-da6cc1d49c31": Phase="Pending", Reason="", readiness=false. Elapsed: 18.195175ms May 19 22:03:00.865: INFO: Pod "pod-projected-secrets-2cd1ca97-2628-4d3c-8566-da6cc1d49c31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022328844s May 19 22:03:02.869: INFO: Pod "pod-projected-secrets-2cd1ca97-2628-4d3c-8566-da6cc1d49c31": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026301564s STEP: Saw pod success May 19 22:03:02.869: INFO: Pod "pod-projected-secrets-2cd1ca97-2628-4d3c-8566-da6cc1d49c31" satisfied condition "success or failure" May 19 22:03:02.871: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-2cd1ca97-2628-4d3c-8566-da6cc1d49c31 container projected-secret-volume-test: STEP: delete the pod May 19 22:03:02.893: INFO: Waiting for pod pod-projected-secrets-2cd1ca97-2628-4d3c-8566-da6cc1d49c31 to disappear May 19 22:03:02.903: INFO: Pod pod-projected-secrets-2cd1ca97-2628-4d3c-8566-da6cc1d49c31 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:03:02.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5046" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3701,"failed":0} ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:03:02.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-3dd31428-3c3a-43c5-b94a-953ec08c3437 in namespace container-probe-1356 May 19 22:03:07.063: INFO: Started pod test-webserver-3dd31428-3c3a-43c5-b94a-953ec08c3437 in namespace container-probe-1356 STEP: checking the pod's current state and verifying that restartCount is present May 19 22:03:07.067: INFO: Initial restart count of pod test-webserver-3dd31428-3c3a-43c5-b94a-953ec08c3437 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:07:07.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1356" for this suite. 
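The probe under test in the container-probe spec is a plain httpGet liveness check against a server that stays healthy; the suite then simply watches restartCount remain 0 for several minutes, which is why this one takes 245 seconds. A sketch with assumed image tag, port, and thresholds (assigning via the embedded handler field keeps it compiling across k8s.io/api versions that renamed Probe.Handler to Probe.ProbeHandler):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    probe := corev1.Probe{
        InitialDelaySeconds: 15,
        PeriodSeconds:       10,
        FailureThreshold:    3, // three consecutive failures before a restart
    }
    // Field promoted from the embedded handler struct; works on old and new APIs.
    probe.HTTPGet = &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)}

    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"}, // placeholder
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:          "test-webserver",
                Image:         "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0", // tag assumed
                LivenessProbe: &probe,
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}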
• [SLOW TEST:245.106 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3701,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:07:08.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 19 22:07:08.271: INFO: Waiting up to 5m0s for pod "client-containers-c8e72187-b958-433e-ae75-d3c3016ff054" in namespace "containers-7886" to be "success or failure" May 19 22:07:08.293: INFO: Pod "client-containers-c8e72187-b958-433e-ae75-d3c3016ff054": Phase="Pending", Reason="", readiness=false. Elapsed: 21.824165ms May 19 22:07:10.296: INFO: Pod "client-containers-c8e72187-b958-433e-ae75-d3c3016ff054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025281223s May 19 22:07:12.300: INFO: Pod "client-containers-c8e72187-b958-433e-ae75-d3c3016ff054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028968187s STEP: Saw pod success May 19 22:07:12.300: INFO: Pod "client-containers-c8e72187-b958-433e-ae75-d3c3016ff054" satisfied condition "success or failure" May 19 22:07:12.303: INFO: Trying to get logs from node jerma-worker2 pod client-containers-c8e72187-b958-433e-ae75-d3c3016ff054 container test-container: STEP: delete the pod May 19 22:07:12.359: INFO: Waiting for pod client-containers-c8e72187-b958-433e-ae75-d3c3016ff054 to disappear May 19 22:07:12.400: INFO: Pod client-containers-c8e72187-b958-433e-ae75-d3c3016ff054 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:07:12.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7886" for this suite. 
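The "docker cmd" in the test name maps to the container's args field: args replaces the image's CMD while leaving its ENTRYPOINT intact, whereas command would replace the ENTRYPOINT as well. A sketch of the kind of pod this spec creates; the agnhost arguments mirror what the conformance test typically passes, but treat them as illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"}, // placeholder
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                // Args replaces the image's CMD; the image's ENTRYPOINT still runs
                // and echoes back the arguments it received, which the test checks.
                Args: []string{"entrypoint-tester", "override", "arguments"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}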
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3719,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:07:12.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 19 22:07:12.534: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:07:28.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2352" for this suite. • [SLOW TEST:16.212 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":223,"skipped":3721,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:07:28.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:07:45.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2296" for this suite. • [SLOW TEST:17.136 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":224,"skipped":3727,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:07:45.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-ad43f9ac-9ca1-4e1a-8265-dc6669dd471e May 19 22:07:45.838: INFO: Pod name my-hostname-basic-ad43f9ac-9ca1-4e1a-8265-dc6669dd471e: Found 0 pods out of 1 May 19 22:07:50.844: INFO: Pod name my-hostname-basic-ad43f9ac-9ca1-4e1a-8265-dc6669dd471e: Found 1 pods out of 1 May 19 22:07:50.844: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ad43f9ac-9ca1-4e1a-8265-dc6669dd471e" are running May 19 22:07:50.846: INFO: Pod "my-hostname-basic-ad43f9ac-9ca1-4e1a-8265-dc6669dd471e-pt7qp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 22:07:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 22:07:49 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 22:07:49 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 22:07:45 +0000 UTC Reason: Message:}]) May 19 22:07:50.846: INFO: Trying to dial the pod May 19 22:07:55.855: INFO: Controller my-hostname-basic-ad43f9ac-9ca1-4e1a-8265-dc6669dd471e: Got expected result from replica 1 [my-hostname-basic-ad43f9ac-9ca1-4e1a-8265-dc6669dd471e-pt7qp]: 
"my-hostname-basic-ad43f9ac-9ca1-4e1a-8265-dc6669dd471e-pt7qp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:07:55.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8368" for this suite. • [SLOW TEST:10.141 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":225,"skipped":3729,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:07:55.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 19 22:07:55.958: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:08:11.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8806" for this suite. 
• [SLOW TEST:15.732 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":226,"skipped":3734,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:08:11.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 22:08:11.722: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b486953e-a8e1-4d59-8bc7-9bf3785fc607" in namespace "downward-api-3189" to be "success or failure" May 19 22:08:11.754: INFO: Pod "downwardapi-volume-b486953e-a8e1-4d59-8bc7-9bf3785fc607": Phase="Pending", Reason="", readiness=false. Elapsed: 31.851195ms May 19 22:08:13.758: INFO: Pod "downwardapi-volume-b486953e-a8e1-4d59-8bc7-9bf3785fc607": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0360284s May 19 22:08:15.763: INFO: Pod "downwardapi-volume-b486953e-a8e1-4d59-8bc7-9bf3785fc607": Phase="Running", Reason="", readiness=true. Elapsed: 4.040961219s May 19 22:08:17.768: INFO: Pod "downwardapi-volume-b486953e-a8e1-4d59-8bc7-9bf3785fc607": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045560984s STEP: Saw pod success May 19 22:08:17.768: INFO: Pod "downwardapi-volume-b486953e-a8e1-4d59-8bc7-9bf3785fc607" satisfied condition "success or failure" May 19 22:08:17.771: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b486953e-a8e1-4d59-8bc7-9bf3785fc607 container client-container: STEP: delete the pod May 19 22:08:17.806: INFO: Waiting for pod downwardapi-volume-b486953e-a8e1-4d59-8bc7-9bf3785fc607 to disappear May 19 22:08:17.838: INFO: Pod downwardapi-volume-b486953e-a8e1-4d59-8bc7-9bf3785fc607 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:08:17.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3189" for this suite. 
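The memory-limit file in the downward API spec comes from a resourceFieldRef item on a downwardAPI volume, which requires the container to declare the referenced limit. A sketch with placeholder names and an assumed 64Mi limit:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // placeholder
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox:1.29",
                Command: []string{"cat", "/etc/podinfo/memory_limit"},
                Resources: corev1.ResourceRequirements{
                    Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_limit",
                            // resourceFieldRef exposes the container's own limit;
                            // the file should read 67108864 (64Mi in bytes).
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.memory",
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}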
• [SLOW TEST:6.215 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:08:17.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 19 22:08:18.014: INFO: Waiting up to 5m0s for pod "var-expansion-4e733fa9-42ac-49f5-ab94-4ac6e22f3761" in namespace "var-expansion-8522" to be "success or failure" May 19 22:08:18.018: INFO: Pod "var-expansion-4e733fa9-42ac-49f5-ab94-4ac6e22f3761": Phase="Pending", Reason="", readiness=false. Elapsed: 3.668966ms May 19 22:08:20.022: INFO: Pod "var-expansion-4e733fa9-42ac-49f5-ab94-4ac6e22f3761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00770482s May 19 22:08:22.183: INFO: Pod "var-expansion-4e733fa9-42ac-49f5-ab94-4ac6e22f3761": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168825824s STEP: Saw pod success May 19 22:08:22.183: INFO: Pod "var-expansion-4e733fa9-42ac-49f5-ab94-4ac6e22f3761" satisfied condition "success or failure" May 19 22:08:22.186: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-4e733fa9-42ac-49f5-ab94-4ac6e22f3761 container dapi-container: STEP: delete the pod May 19 22:08:22.246: INFO: Waiting for pod var-expansion-4e733fa9-42ac-49f5-ab94-4ac6e22f3761 to disappear May 19 22:08:22.258: INFO: Pod var-expansion-4e733fa9-42ac-49f5-ab94-4ac6e22f3761 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:08:22.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8522" for this suite. 
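Variable expansion in a container's command is performed by the kubelet itself on $(NAME) references to declared env vars, with no shell involved. A sketch with placeholder names and values:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // placeholder
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "dapi-container",
                Image: "busybox:1.29",
                Env:   []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
                // The kubelet substitutes $(TEST_VAR) before exec; no shell is
                // involved, which is the point of the conformance check.
                Command: []string{"echo", "message: $(TEST_VAR)"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}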
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3769,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:08:22.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-9fb632ec-6c76-426b-b283-d211554eebfc [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:08:22.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8191" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":229,"skipped":3789,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:08:22.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:08:26.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4033" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3809,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:08:26.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c7c94aef-58b9-49fd-a837-f07d058d4e84 STEP: Creating a pod to test consume configMaps May 19 22:08:26.637: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc2287b0-dbf0-4953-ba98-29af2f5030f4" in namespace "configmap-415" to be "success or failure" May 19 22:08:26.661: INFO: Pod "pod-configmaps-fc2287b0-dbf0-4953-ba98-29af2f5030f4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.005362ms May 19 22:08:28.718: INFO: Pod "pod-configmaps-fc2287b0-dbf0-4953-ba98-29af2f5030f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080863991s May 19 22:08:30.723: INFO: Pod "pod-configmaps-fc2287b0-dbf0-4953-ba98-29af2f5030f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08557328s STEP: Saw pod success May 19 22:08:30.723: INFO: Pod "pod-configmaps-fc2287b0-dbf0-4953-ba98-29af2f5030f4" satisfied condition "success or failure" May 19 22:08:30.726: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-fc2287b0-dbf0-4953-ba98-29af2f5030f4 container configmap-volume-test: STEP: delete the pod May 19 22:08:30.777: INFO: Waiting for pod pod-configmaps-fc2287b0-dbf0-4953-ba98-29af2f5030f4 to disappear May 19 22:08:30.785: INFO: Pod pod-configmaps-fc2287b0-dbf0-4953-ba98-29af2f5030f4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:08:30.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-415" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:08:30.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 22:08:31.309: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 22:08:33.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522911, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522911, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522911, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522911, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 22:08:36.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:08:37.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5641" for this suite. STEP: Destroying namespace "webhook-5641-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.364 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":232,"skipped":3853,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:08:37.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 22:08:37.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfb9bdb9-6b78-4dc7-9ac2-0584cf3ffd4d" in namespace "downward-api-3858" to be "success or failure" May 19 22:08:37.240: INFO: Pod "downwardapi-volume-cfb9bdb9-6b78-4dc7-9ac2-0584cf3ffd4d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.856674ms May 19 22:08:39.244: INFO: Pod "downwardapi-volume-cfb9bdb9-6b78-4dc7-9ac2-0584cf3ffd4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023436289s May 19 22:08:41.247: INFO: Pod "downwardapi-volume-cfb9bdb9-6b78-4dc7-9ac2-0584cf3ffd4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02700864s STEP: Saw pod success May 19 22:08:41.248: INFO: Pod "downwardapi-volume-cfb9bdb9-6b78-4dc7-9ac2-0584cf3ffd4d" satisfied condition "success or failure" May 19 22:08:41.250: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-cfb9bdb9-6b78-4dc7-9ac2-0584cf3ffd4d container client-container: STEP: delete the pod May 19 22:08:41.373: INFO: Waiting for pod downwardapi-volume-cfb9bdb9-6b78-4dc7-9ac2-0584cf3ffd4d to disappear May 19 22:08:41.396: INFO: Pod downwardapi-volume-cfb9bdb9-6b78-4dc7-9ac2-0584cf3ffd4d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:08:41.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3858" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3856,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:08:41.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:08:57.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3683" for this suite. • [SLOW TEST:16.557 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":234,"skipped":3871,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:08:57.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 19 22:08:58.021: INFO: PodSpec: initContainers in spec.initContainers May 19 22:09:54.050: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-02ffec1d-7807-4a45-a16c-0a1ed22bccba", GenerateName:"", Namespace:"init-container-2450", SelfLink:"/api/v1/namespaces/init-container-2450/pods/pod-init-02ffec1d-7807-4a45-a16c-0a1ed22bccba", UID:"12e629ec-99ac-45ef-9fcd-1fac0350a212", ResourceVersion:"17545899", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725522938, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"21350898"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-g8ghz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006630f80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-g8ghz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-g8ghz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-g8ghz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00377d238), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002bfbec0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00377d2d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00377d2f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00377d2f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00377d2fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522938, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522938, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522938, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725522938, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.134", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.134"}}, StartTime:(*v1.Time)(0xc003cf75c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003cf7620), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029143f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e45f2728f9783f64d0be43ea4a870e816402e05bceb666ea614c50e2f22bf7df", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003cf7640), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003cf7600), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00377d38f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:09:54.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2450" for this suite. • [SLOW TEST:56.136 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":235,"skipped":3874,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:09:54.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3376 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3376;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3376 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3376;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3376.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3376.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3376.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3376.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3376.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3376.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-3376.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3376.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3376.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3376.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3376.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 178.75.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.75.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.75.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.75.178_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3376 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3376;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3376 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3376;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3376.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3376.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3376.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3376.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3376.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3376.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3376.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3376.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3376.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3376.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3376.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3376.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 178.75.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.75.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.75.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.75.178_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 22:10:02.414: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.417: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.419: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.422: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.424: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.426: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.429: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.431: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.453: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.456: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.459: INFO: Unable to read jessie_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.462: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.465: INFO: Unable to read jessie_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.468: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.471: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.474: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:02.492: INFO: Lookups using dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3376 wheezy_tcp@dns-test-service.dns-3376 wheezy_udp@dns-test-service.dns-3376.svc wheezy_tcp@dns-test-service.dns-3376.svc wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3376 jessie_tcp@dns-test-service.dns-3376 jessie_udp@dns-test-service.dns-3376.svc jessie_tcp@dns-test-service.dns-3376.svc jessie_udp@_http._tcp.dns-test-service.dns-3376.svc jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc] May 19 22:10:07.498: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.501: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.503: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.505: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.507: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.509: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.511: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.514: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.536: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.538: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.541: INFO: Unable to read jessie_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.543: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.546: INFO: Unable to read jessie_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.548: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.550: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.553: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:07.569: INFO: Lookups using dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3376 wheezy_tcp@dns-test-service.dns-3376 wheezy_udp@dns-test-service.dns-3376.svc wheezy_tcp@dns-test-service.dns-3376.svc wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3376 jessie_tcp@dns-test-service.dns-3376 jessie_udp@dns-test-service.dns-3376.svc jessie_tcp@dns-test-service.dns-3376.svc jessie_udp@_http._tcp.dns-test-service.dns-3376.svc jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc] May 19 22:10:12.503: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.506: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.508: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.510: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376 from pod 
dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.512: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.514: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.516: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.518: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.534: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.536: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.539: INFO: Unable to read jessie_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.542: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.544: INFO: Unable to read jessie_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.548: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.551: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.554: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:12.571: INFO: Lookups using dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3376 wheezy_tcp@dns-test-service.dns-3376 wheezy_udp@dns-test-service.dns-3376.svc wheezy_tcp@dns-test-service.dns-3376.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3376 jessie_tcp@dns-test-service.dns-3376 jessie_udp@dns-test-service.dns-3376.svc jessie_tcp@dns-test-service.dns-3376.svc jessie_udp@_http._tcp.dns-test-service.dns-3376.svc jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc] May 19 22:10:17.540: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.544: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.550: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.552: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.555: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.557: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.560: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.578: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.580: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.583: INFO: Unable to read jessie_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.586: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.589: INFO: Unable to read jessie_udp@dns-test-service.dns-3376.svc from pod 
dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.592: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.594: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.597: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:17.621: INFO: Lookups using dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3376 wheezy_tcp@dns-test-service.dns-3376 wheezy_udp@dns-test-service.dns-3376.svc wheezy_tcp@dns-test-service.dns-3376.svc wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3376 jessie_tcp@dns-test-service.dns-3376 jessie_udp@dns-test-service.dns-3376.svc jessie_tcp@dns-test-service.dns-3376.svc jessie_udp@_http._tcp.dns-test-service.dns-3376.svc jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc] May 19 22:10:22.497: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.500: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.503: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.506: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.508: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.511: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.515: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.517: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod 
dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.539: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.543: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.546: INFO: Unable to read jessie_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.548: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.551: INFO: Unable to read jessie_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.554: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.557: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.560: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:22.576: INFO: Lookups using dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3376 wheezy_tcp@dns-test-service.dns-3376 wheezy_udp@dns-test-service.dns-3376.svc wheezy_tcp@dns-test-service.dns-3376.svc wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3376 jessie_tcp@dns-test-service.dns-3376 jessie_udp@dns-test-service.dns-3376.svc jessie_tcp@dns-test-service.dns-3376.svc jessie_udp@_http._tcp.dns-test-service.dns-3376.svc jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc] May 19 22:10:27.504: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.507: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.509: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the 
server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.512: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.515: INFO: Unable to read wheezy_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.517: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.519: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.521: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.536: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.539: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.541: INFO: Unable to read jessie_udp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.543: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376 from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.546: INFO: Unable to read jessie_udp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.549: INFO: Unable to read jessie_tcp@dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.551: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.553: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc from pod dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d: the server could not find the requested resource (get pods dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d) May 19 22:10:27.568: INFO: Lookups using dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3376 wheezy_tcp@dns-test-service.dns-3376 wheezy_udp@dns-test-service.dns-3376.svc wheezy_tcp@dns-test-service.dns-3376.svc wheezy_udp@_http._tcp.dns-test-service.dns-3376.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3376.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3376 jessie_tcp@dns-test-service.dns-3376 jessie_udp@dns-test-service.dns-3376.svc jessie_tcp@dns-test-service.dns-3376.svc jessie_udp@_http._tcp.dns-test-service.dns-3376.svc jessie_tcp@_http._tcp.dns-test-service.dns-3376.svc] May 19 22:10:32.600: INFO: DNS probes using dns-3376/dns-test-0bf84d1a-56c6-4224-923e-962867d89b8d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:10:33.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3376" for this suite. • [SLOW TEST:39.454 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":236,"skipped":3882,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:10:33.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 22:10:37.695: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:10:37.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1397" for this suite. 
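The termination-message test above hinges on the semantics of TerminationMessagePolicy: FallbackToLogsOnError substitutes container logs for the message only when the container fails, so a container that exits 0 without writing /dev/termination-log reports an empty message (the "&{}" expectation in the log). A minimal sketch of that container shape, with illustrative names and command:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fallbackPolicyPod sketches the checked container: it succeeds and writes
// only to stdout, never to the termination-log path, so with
// FallbackToLogsOnError the reported termination message stays empty.
func fallbackPolicyPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-container"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo OK; exit 0"}, // succeeds; stdout only
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}

func main() { _ = fallbackPolicyPod() }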
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3900,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:10:37.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-ce3d55e2-7b42-4c0e-b700-7e8b4d0ccc95 STEP: Creating a pod to test consume secrets May 19 22:10:38.107: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2d33da1-8db0-4262-a952-08dfcc57b57f" in namespace "projected-8322" to be "success or failure" May 19 22:10:38.111: INFO: Pod "pod-projected-secrets-e2d33da1-8db0-4262-a952-08dfcc57b57f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.577292ms May 19 22:10:40.337: INFO: Pod "pod-projected-secrets-e2d33da1-8db0-4262-a952-08dfcc57b57f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230264422s May 19 22:10:42.342: INFO: Pod "pod-projected-secrets-e2d33da1-8db0-4262-a952-08dfcc57b57f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.234633272s STEP: Saw pod success May 19 22:10:42.342: INFO: Pod "pod-projected-secrets-e2d33da1-8db0-4262-a952-08dfcc57b57f" satisfied condition "success or failure" May 19 22:10:42.344: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e2d33da1-8db0-4262-a952-08dfcc57b57f container secret-volume-test: STEP: delete the pod May 19 22:10:42.413: INFO: Waiting for pod pod-projected-secrets-e2d33da1-8db0-4262-a952-08dfcc57b57f to disappear May 19 22:10:42.422: INFO: Pod pod-projected-secrets-e2d33da1-8db0-4262-a952-08dfcc57b57f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:10:42.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8322" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3900,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:10:42.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 22:10:42.465: INFO: Creating ReplicaSet my-hostname-basic-f41101e4-fec6-41b7-9946-f11b5c946b06 May 19 22:10:42.488: INFO: Pod name my-hostname-basic-f41101e4-fec6-41b7-9946-f11b5c946b06: Found 0 pods out of 1 May 19 22:10:47.510: INFO: Pod name my-hostname-basic-f41101e4-fec6-41b7-9946-f11b5c946b06: Found 1 pods out of 1 May 19 22:10:47.510: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f41101e4-fec6-41b7-9946-f11b5c946b06" is running May 19 22:10:47.513: INFO: Pod "my-hostname-basic-f41101e4-fec6-41b7-9946-f11b5c946b06-xp2k2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 22:10:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 22:10:46 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 22:10:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 22:10:42 +0000 UTC Reason: Message:}]) May 19 22:10:47.513: INFO: Trying to dial the pod May 19 22:10:52.526: INFO: Controller my-hostname-basic-f41101e4-fec6-41b7-9946-f11b5c946b06: Got expected result from replica 1 [my-hostname-basic-f41101e4-fec6-41b7-9946-f11b5c946b06-xp2k2]: "my-hostname-basic-f41101e4-fec6-41b7-9946-f11b5c946b06-xp2k2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:10:52.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7995" for this suite. 
• [SLOW TEST:10.115 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":239,"skipped":3925,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:10:52.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 22:10:53.251: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 22:10:55.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523053, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523053, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523053, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523053, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 22:10:58.302: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 22:10:58.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4682-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:10:59.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2474" for this suite. STEP: Destroying namespace "webhook-2474-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.147 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":240,"skipped":3944,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:10:59.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 22:11:03.873: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:11:03.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1749" for this suite. 
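------------------------------
Here the fixture succeeds after writing to its termination-log file (hence Expected: &{OK}), and the test reads the message back from the terminated container's status. A sketch of that status lookup; the helper name is illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

// terminationMessage pulls the message recorded for the first terminated
// container, e.g. one whose command was roughly:
//   /bin/sh -c 'echo -n OK > /dev/termination-log'
// The kubelet copies the file contents into the container status.
func terminationMessage(pod *corev1.Pod) (string, bool) {
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil {
			return t.Message, true
		}
	}
	return "", false
}
------------------------------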
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3955,"failed":0} SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:11:04.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:11:04.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9146" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":242,"skipped":3959,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:11:04.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-75c3cbfb-1d30-428e-aa43-2d5f8d426b13 STEP: Creating a pod to test consume configMaps May 19 22:11:04.299: INFO: Waiting up to 5m0s for pod "pod-configmaps-2273c58a-7b45-411b-a80b-fc1ff61973c2" in namespace "configmap-529" to be "success or failure" May 19 22:11:04.318: INFO: Pod "pod-configmaps-2273c58a-7b45-411b-a80b-fc1ff61973c2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.273451ms May 19 22:11:06.328: INFO: Pod "pod-configmaps-2273c58a-7b45-411b-a80b-fc1ff61973c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028751296s May 19 22:11:08.333: INFO: Pod "pod-configmaps-2273c58a-7b45-411b-a80b-fc1ff61973c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034015718s STEP: Saw pod success May 19 22:11:08.333: INFO: Pod "pod-configmaps-2273c58a-7b45-411b-a80b-fc1ff61973c2" satisfied condition "success or failure" May 19 22:11:08.337: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-2273c58a-7b45-411b-a80b-fc1ff61973c2 container configmap-volume-test: STEP: delete the pod May 19 22:11:08.352: INFO: Waiting for pod pod-configmaps-2273c58a-7b45-411b-a80b-fc1ff61973c2 to disappear May 19 22:11:08.369: INFO: Pod pod-configmaps-2273c58a-7b45-411b-a80b-fc1ff61973c2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:11:08.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-529" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3961,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:11:08.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 19 22:11:08.720: INFO: Waiting up to 5m0s for pod "downward-api-d95bde4b-e2cf-46b9-a92e-e8f3c1061b86" in namespace "downward-api-4843" to be "success or failure" May 19 22:11:08.756: INFO: Pod "downward-api-d95bde4b-e2cf-46b9-a92e-e8f3c1061b86": Phase="Pending", Reason="", readiness=false. Elapsed: 36.518036ms May 19 22:11:10.804: INFO: Pod "downward-api-d95bde4b-e2cf-46b9-a92e-e8f3c1061b86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084350244s May 19 22:11:12.879: INFO: Pod "downward-api-d95bde4b-e2cf-46b9-a92e-e8f3c1061b86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158895266s STEP: Saw pod success May 19 22:11:12.879: INFO: Pod "downward-api-d95bde4b-e2cf-46b9-a92e-e8f3c1061b86" satisfied condition "success or failure" May 19 22:11:12.881: INFO: Trying to get logs from node jerma-worker pod downward-api-d95bde4b-e2cf-46b9-a92e-e8f3c1061b86 container dapi-container: STEP: delete the pod May 19 22:11:13.019: INFO: Waiting for pod downward-api-d95bde4b-e2cf-46b9-a92e-e8f3c1061b86 to disappear May 19 22:11:13.023: INFO: Pod downward-api-d95bde4b-e2cf-46b9-a92e-e8f3c1061b86 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:11:13.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4843" for this suite. 
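------------------------------
The "default limits" spec relies on a downward API detail: when a container declares no resource limits, the kubelet substitutes node-allocatable values into limits.cpu/limits.memory resolutions, so the env vars are populated either way. A sketch of the env wiring, with assumed variable names:

package sketch

import corev1 "k8s.io/api/core/v1"

// defaultLimitEnvVars exposes limits.cpu and limits.memory through the
// downward API; with no limits on the container, node allocatable is
// what ends up in the variables.
func defaultLimitEnvVars() []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
}
------------------------------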
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4017,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:11:13.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-acbc6448-ea57-4bd0-97bf-6e1ad95e18ea STEP: Creating a pod to test consume secrets May 19 22:11:13.123: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a2ac4362-b162-4c8b-b4b9-c7c1f48e5222" in namespace "projected-4374" to be "success or failure" May 19 22:11:13.131: INFO: Pod "pod-projected-secrets-a2ac4362-b162-4c8b-b4b9-c7c1f48e5222": Phase="Pending", Reason="", readiness=false. Elapsed: 7.311746ms May 19 22:11:15.170: INFO: Pod "pod-projected-secrets-a2ac4362-b162-4c8b-b4b9-c7c1f48e5222": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046310448s May 19 22:11:17.175: INFO: Pod "pod-projected-secrets-a2ac4362-b162-4c8b-b4b9-c7c1f48e5222": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051097923s STEP: Saw pod success May 19 22:11:17.175: INFO: Pod "pod-projected-secrets-a2ac4362-b162-4c8b-b4b9-c7c1f48e5222" satisfied condition "success or failure" May 19 22:11:17.178: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-a2ac4362-b162-4c8b-b4b9-c7c1f48e5222 container projected-secret-volume-test: STEP: delete the pod May 19 22:11:17.217: INFO: Waiting for pod pod-projected-secrets-a2ac4362-b162-4c8b-b4b9-c7c1f48e5222 to disappear May 19 22:11:17.219: INFO: Pod pod-projected-secrets-a2ac4362-b162-4c8b-b4b9-c7c1f48e5222 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:11:17.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4374" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4024,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:11:17.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 19 22:11:17.511: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:17.539: INFO: Number of nodes with available pods: 0 May 19 22:11:17.539: INFO: Node jerma-worker is running more than one daemon pod May 19 22:11:18.544: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:18.547: INFO: Number of nodes with available pods: 0 May 19 22:11:18.547: INFO: Node jerma-worker is running more than one daemon pod May 19 22:11:19.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:19.623: INFO: Number of nodes with available pods: 0 May 19 22:11:19.623: INFO: Node jerma-worker is running more than one daemon pod May 19 22:11:20.663: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:20.666: INFO: Number of nodes with available pods: 0 May 19 22:11:20.666: INFO: Node jerma-worker is running more than one daemon pod May 19 22:11:21.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:21.546: INFO: Number of nodes with available pods: 1 May 19 22:11:21.546: INFO: Node jerma-worker is running more than one daemon pod May 19 22:11:22.544: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:22.548: INFO: Number of nodes with available pods: 2 May 19 22:11:22.548: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 19 22:11:22.626: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:22.630: INFO: Number of nodes with available pods: 1 May 19 22:11:22.630: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:23.655: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:23.700: INFO: Number of nodes with available pods: 1 May 19 22:11:23.700: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:24.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:24.639: INFO: Number of nodes with available pods: 1 May 19 22:11:24.639: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:25.636: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:25.640: INFO: Number of nodes with available pods: 1 May 19 22:11:25.640: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:26.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:26.638: INFO: Number of nodes with available pods: 1 May 19 22:11:26.639: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:27.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:27.639: INFO: Number of nodes with available pods: 1 May 19 22:11:27.639: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:28.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:28.639: INFO: Number of nodes with available pods: 1 May 19 22:11:28.639: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:29.636: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:29.640: INFO: Number of nodes with available pods: 1 May 19 22:11:29.640: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:30.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:30.638: INFO: Number of nodes with available pods: 1 May 19 22:11:30.638: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:31.636: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:31.639: INFO: Number of nodes with available pods: 1 May 19 22:11:31.639: INFO: Node jerma-worker2 is running more than one daemon pod May 19 22:11:32.635: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 22:11:32.639: INFO: Number of nodes with available pods: 2 May 19 22:11:32.639: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9449, will wait for the garbage collector to delete the pods May 19 22:11:32.702: INFO: Deleting DaemonSet.extensions daemon-set took: 6.796448ms May 19 22:11:33.102: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.266192ms May 19 22:11:39.517: INFO: Number of nodes with available pods: 0 May 19 22:11:39.517: INFO: Number of running nodes: 0, number of available pods: 0 May 19 22:11:39.520: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9449/daemonsets","resourceVersion":"17546595"},"items":null} May 19 22:11:39.522: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9449/pods","resourceVersion":"17546595"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:11:39.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9449" for this suite. • [SLOW TEST:22.317 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":246,"skipped":4026,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:11:39.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 22:11:40.174: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 22:11:42.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523100, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523100, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523100, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523100, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 22:11:45.281: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:11:45.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1820" for this suite. STEP: Destroying namespace "webhook-1820-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.828 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":247,"skipped":4032,"failed":0} [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:11:46.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d3963b2d-7e4a-47a9-858b-b695fe7d8d4a STEP: Creating a pod to test consume secrets May 19 22:11:46.453: INFO: Waiting up to 5m0s for pod "pod-secrets-c5e7d67a-6297-4d20-81fd-99d8451c805f" in namespace "secrets-5856" to be "success or failure" May 19 22:11:46.457: INFO: Pod "pod-secrets-c5e7d67a-6297-4d20-81fd-99d8451c805f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.601373ms May 19 22:11:48.462: INFO: Pod "pod-secrets-c5e7d67a-6297-4d20-81fd-99d8451c805f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008596596s May 19 22:11:50.466: INFO: Pod "pod-secrets-c5e7d67a-6297-4d20-81fd-99d8451c805f": Phase="Running", Reason="", readiness=true. Elapsed: 4.01297004s May 19 22:11:52.471: INFO: Pod "pod-secrets-c5e7d67a-6297-4d20-81fd-99d8451c805f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017655132s STEP: Saw pod success May 19 22:11:52.471: INFO: Pod "pod-secrets-c5e7d67a-6297-4d20-81fd-99d8451c805f" satisfied condition "success or failure" May 19 22:11:52.474: INFO: Trying to get logs from node jerma-worker pod pod-secrets-c5e7d67a-6297-4d20-81fd-99d8451c805f container secret-volume-test: STEP: delete the pod May 19 22:11:52.507: INFO: Waiting for pod pod-secrets-c5e7d67a-6297-4d20-81fd-99d8451c805f to disappear May 19 22:11:52.516: INFO: Pod pod-secrets-c5e7d67a-6297-4d20-81fd-99d8451c805f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:11:52.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5856" for this suite. • [SLOW TEST:6.153 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:11:52.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 19 22:11:57.121: INFO: Successfully updated pod "labelsupdate0f6f4a92-606a-4077-bc42-522cd5d8c9e5" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:12:01.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6077" for this suite. 
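------------------------------
The labels-on-modification spec works because a downward API volume is live: the kubelet rewrites the projected file when the pod's labels change, so the update at 22:11:57 becomes observable from inside the container shortly after. A sketch of the volume definition, with an assumed volume name:

package sketch

import corev1 "k8s.io/api/core/v1"

// labelsVolume exposes the pod's own labels as a file; the kubelet
// refreshes the file on label updates, which is the behavior verified
// by polling the mounted file after patching the pod.
func labelsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
}
------------------------------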
• [SLOW TEST:8.647 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4067,"failed":0} [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:12:01.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-dc71768e-3801-4629-9429-337e2a36e363 STEP: Creating configMap with name cm-test-opt-upd-51b1ec9a-f7ad-4b2e-aef0-3c32daba7093 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-dc71768e-3801-4629-9429-337e2a36e363 STEP: Updating configmap cm-test-opt-upd-51b1ec9a-f7ad-4b2e-aef0-3c32daba7093 STEP: Creating configMap with name cm-test-opt-create-feecfb5e-e8ab-4fda-8a1e-657f509a6fda STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:13:37.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5996" for this suite. 
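------------------------------
The optional-updates spec deletes one referenced ConfigMap, updates another, and creates a third after the pod is already running; the pod survives all three because the volume is marked optional. A sketch of one such volume, assuming an illustrative naming scheme:

package sketch

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapVolume tolerates a missing ConfigMap: with Optional
// set, the pod still starts, and the kubelet populates or removes the
// files as the ConfigMap is created, updated, or deleted, which is the
// create/update/delete sequence the spec above walks through.
func optionalConfigMapVolume(name string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: name + "-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Optional:             &optional,
			},
		},
	}
}
------------------------------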
• [SLOW TEST:96.716 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4067,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:13:37.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 22:13:38.291: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 22:13:40.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523218, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523218, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523218, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523218, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 22:13:42.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523218, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523218, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523218, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523218, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 22:13:45.405: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:13:55.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2292" for this suite. STEP: Destroying namespace "webhook-2292-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.858 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":251,"skipped":4081,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:13:55.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:13:55.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "pods-5751" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":252,"skipped":4082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:13:55.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-aa739896-f533-480c-b8e5-0e1f97480804 in namespace container-probe-8853 May 19 22:14:00.021: INFO: Started pod busybox-aa739896-f533-480c-b8e5-0e1f97480804 in namespace container-probe-8853 STEP: checking the pod's current state and verifying that restartCount is present May 19 22:14:00.023: INFO: Initial restart count of pod busybox-aa739896-f533-480c-b8e5-0e1f97480804 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:18:01.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8853" for this suite. 
• [SLOW TEST:245.355 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4107,"failed":0} SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:18:01.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-7705/configmap-test-71946bd0-4fd0-4e23-8011-b232991a7f81 STEP: Creating a pod to test consume configMaps May 19 22:18:01.343: INFO: Waiting up to 5m0s for pod "pod-configmaps-e63bfe45-f5ac-40a1-bfbf-8ba54f9a5a61" in namespace "configmap-7705" to be "success or failure" May 19 22:18:01.346: INFO: Pod "pod-configmaps-e63bfe45-f5ac-40a1-bfbf-8ba54f9a5a61": Phase="Pending", Reason="", readiness=false. Elapsed: 3.615728ms May 19 22:18:03.351: INFO: Pod "pod-configmaps-e63bfe45-f5ac-40a1-bfbf-8ba54f9a5a61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007912734s May 19 22:18:05.355: INFO: Pod "pod-configmaps-e63bfe45-f5ac-40a1-bfbf-8ba54f9a5a61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011902952s STEP: Saw pod success May 19 22:18:05.355: INFO: Pod "pod-configmaps-e63bfe45-f5ac-40a1-bfbf-8ba54f9a5a61" satisfied condition "success or failure" May 19 22:18:05.358: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e63bfe45-f5ac-40a1-bfbf-8ba54f9a5a61 container env-test: STEP: delete the pod May 19 22:18:05.396: INFO: Waiting for pod pod-configmaps-e63bfe45-f5ac-40a1-bfbf-8ba54f9a5a61 to disappear May 19 22:18:05.401: INFO: Pod pod-configmaps-e63bfe45-f5ac-40a1-bfbf-8ba54f9a5a61 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:18:05.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7705" for this suite. 
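------------------------------
Consuming a ConfigMap "via environment variable" is a single valueFrom reference per variable. A sketch of the wiring, with assumed variable, ConfigMap, and key names:

package sketch

import corev1 "k8s.io/api/core/v1"

// configMapEnvVar injects one ConfigMap key into an environment
// variable; the env-test container then just echoes it for the log check.
func configMapEnvVar(configMapName string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: "CONFIG_DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
				Key:                  "data-1",
			},
		},
	}
}
------------------------------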
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4109,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:18:05.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3165.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3165.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3165.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3165.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3165.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3165.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 22:18:11.800: INFO: DNS probes using dns-3165/dns-test-1238a835-e08e-4444-b552-5566ae00e151 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:18:11.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3165" for this suite. 
• [SLOW TEST:6.760 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":255,"skipped":4119,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:18:12.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:18:19.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-23" for this suite. • [SLOW TEST:7.447 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":256,"skipped":4124,"failed":0} [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:18:19.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 22:18:39.753: INFO: Container started at 2020-05-19 22:18:22 +0000 UTC, pod became ready at 2020-05-19 22:18:38 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:18:39.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1822" for this suite. • [SLOW TEST:20.146 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4124,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:18:39.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 19 22:18:39.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-156' May 19 22:18:43.024: INFO: stderr: "" May 19 22:18:43.024: INFO: stdout: "pod/pause created\n" May 19 22:18:43.024: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 19 22:18:43.024: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-156" to be "running and ready" May 19 22:18:43.062: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 38.658902ms May 19 22:18:45.147: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.122773147s May 19 22:18:47.150: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12636418s May 19 22:18:49.155: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.130758674s May 19 22:18:49.155: INFO: Pod "pause" satisfied condition "running and ready" May 19 22:18:49.155: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 19 22:18:49.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-156' May 19 22:18:49.270: INFO: stderr: "" May 19 22:18:49.270: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 19 22:18:49.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-156' May 19 22:18:49.370: INFO: stderr: "" May 19 22:18:49.370: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod May 19 22:18:49.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-156' May 19 22:18:49.482: INFO: stderr: "" May 19 22:18:49.482: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 19 22:18:49.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-156' May 19 22:18:49.574: INFO: stderr: "" May 19 22:18:49.574: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 19 22:18:49.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-156' May 19 22:18:49.715: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 22:18:49.715: INFO: stdout: "pod \"pause\" force deleted\n" May 19 22:18:49.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-156' May 19 22:18:49.816: INFO: stderr: "No resources found in kubectl-156 namespace.\n" May 19 22:18:49.816: INFO: stdout: "" May 19 22:18:49.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-156 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 22:18:49.916: INFO: stderr: "" May 19 22:18:49.916: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:18:49.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-156" for this suite. 
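------------------------------
The label round-trip above reduces to a handful of kubectl invocations (the pause image tag is a placeholder; `-L` prints the label as an extra output column, and a trailing `-` on the key removes it):

kubectl run pause --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label     # TESTING-LABEL column carries the value
kubectl label pods pause testing-label-    # trailing '-' deletes the label
------------------------------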
• [SLOW TEST:10.162 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":258,"skipped":4142,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:18:49.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6352.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6352.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 22:18:56.396: INFO: DNS probes using dns-6352/dns-test-c2806426-a498-4c9e-8ee0-a35765c75f2e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:18:56.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6352" for this suite. 
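------------------------------
Stripped of the retry loop, the cluster-DNS probe is two lookups of the apiserver's Service name, one per transport; either should answer with the cluster IP of the `kubernetes` Service:

dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A   # UDP
dig +tcp   +noall +answer +search kubernetes.default.svc.cluster.local A   # TCP
------------------------------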
• [SLOW TEST:6.576 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":259,"skipped":4158,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:18:56.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 22:19:00.984: INFO: Waiting up to 5m0s for pod "client-envvars-e53b6d38-8ef8-4729-b66a-c059c6dcf449" in namespace "pods-274" to be "success or failure" May 19 22:19:01.002: INFO: Pod "client-envvars-e53b6d38-8ef8-4729-b66a-c059c6dcf449": Phase="Pending", Reason="", readiness=false. Elapsed: 18.682333ms May 19 22:19:03.006: INFO: Pod "client-envvars-e53b6d38-8ef8-4729-b66a-c059c6dcf449": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022578684s May 19 22:19:05.010: INFO: Pod "client-envvars-e53b6d38-8ef8-4729-b66a-c059c6dcf449": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025848226s STEP: Saw pod success May 19 22:19:05.010: INFO: Pod "client-envvars-e53b6d38-8ef8-4729-b66a-c059c6dcf449" satisfied condition "success or failure" May 19 22:19:05.012: INFO: Trying to get logs from node jerma-worker pod client-envvars-e53b6d38-8ef8-4729-b66a-c059c6dcf449 container env3cont: STEP: delete the pod May 19 22:19:05.052: INFO: Waiting for pod client-envvars-e53b6d38-8ef8-4729-b66a-c059c6dcf449 to disappear May 19 22:19:05.061: INFO: Pod client-envvars-e53b6d38-8ef8-4729-b66a-c059c6dcf449 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:19:05.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-274" for this suite. 
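------------------------------
What this test asserts: a container started after a Service exists sees Docker-links-style environment variables for it. A sketch under assumed names (`fooservice` is hypothetical; the variable prefix is the Service name upper-cased, with dashes turned into underscores):

kubectl create service clusterip fooservice --tcp=8765:8080
# any pod created afterwards in the same namespace gets, among others:
#   FOOSERVICE_SERVICE_HOST=<cluster IP>
#   FOOSERVICE_SERVICE_PORT=8765
kubectl run env-check --image=busybox:1.29 --restart=Never -- sh -c 'env | grep FOOSERVICE'
kubectl logs env-check
------------------------------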
• [SLOW TEST:8.565 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4222,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:19:05.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 19 22:19:05.108: INFO: namespace kubectl-7828 May 19 22:19:05.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7828' May 19 22:19:05.381: INFO: stderr: "" May 19 22:19:05.381: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 19 22:19:06.385: INFO: Selector matched 1 pods for map[app:agnhost] May 19 22:19:06.385: INFO: Found 0 / 1 May 19 22:19:07.385: INFO: Selector matched 1 pods for map[app:agnhost] May 19 22:19:07.385: INFO: Found 0 / 1 May 19 22:19:08.386: INFO: Selector matched 1 pods for map[app:agnhost] May 19 22:19:08.386: INFO: Found 0 / 1 May 19 22:19:09.386: INFO: Selector matched 1 pods for map[app:agnhost] May 19 22:19:09.386: INFO: Found 1 / 1 May 19 22:19:09.386: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 22:19:09.390: INFO: Selector matched 1 pods for map[app:agnhost] May 19 22:19:09.390: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 19 22:19:09.390: INFO: wait on agnhost-master startup in kubectl-7828 May 19 22:19:09.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-87hg7 agnhost-master --namespace=kubectl-7828' May 19 22:19:09.506: INFO: stderr: "" May 19 22:19:09.506: INFO: stdout: "Paused\n" STEP: exposing RC May 19 22:19:09.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7828' May 19 22:19:09.646: INFO: stderr: "" May 19 22:19:09.646: INFO: stdout: "service/rm2 exposed\n" May 19 22:19:09.654: INFO: Service rm2 in namespace kubectl-7828 found. STEP: exposing service May 19 22:19:11.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7828' May 19 22:19:11.810: INFO: stderr: "" May 19 22:19:11.810: INFO: stdout: "service/rm3 exposed\n" May 19 22:19:11.821: INFO: Service rm3 in namespace kubectl-7828 found. 
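------------------------------
Before the teardown below: the expose sequence above is two imperative commands, the second showing that an existing Service can itself be re-exposed under a new name and port:

kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get svc rm2 rm3
------------------------------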
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:19:13.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7828" for this suite. • [SLOW TEST:8.770 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":261,"skipped":4239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:19:13.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7563 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7563 I0519 22:19:13.991467 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7563, replica count: 2 I0519 22:19:17.041913 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 22:19:20.042185 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 22:19:20.042: INFO: Creating new exec pod May 19 22:19:25.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7563 execpod2s4jx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 19 22:19:25.321: INFO: stderr: "I0519 22:19:25.202892 3802 log.go:172] (0xc00020ed10) (0xc0006a7f40) Create stream\nI0519 22:19:25.202947 3802 log.go:172] (0xc00020ed10) (0xc0006a7f40) Stream added, broadcasting: 1\nI0519 22:19:25.206566 3802 log.go:172] (0xc00020ed10) Reply frame received for 1\nI0519 22:19:25.206626 3802 log.go:172] (0xc00020ed10) (0xc000650820) Create stream\nI0519 22:19:25.206646 3802 log.go:172] (0xc00020ed10) (0xc000650820) Stream added, broadcasting: 3\nI0519 22:19:25.208072 3802 log.go:172] (0xc00020ed10) Reply frame received for 3\nI0519 22:19:25.208147 3802 log.go:172] (0xc00020ed10) (0xc000912000) Create stream\nI0519 22:19:25.208170 3802 log.go:172] (0xc00020ed10) (0xc000912000) Stream added, broadcasting: 5\nI0519 
22:19:25.209521 3802 log.go:172] (0xc00020ed10) Reply frame received for 5\nI0519 22:19:25.278573 3802 log.go:172] (0xc00020ed10) Data frame received for 5\nI0519 22:19:25.278601 3802 log.go:172] (0xc000912000) (5) Data frame handling\nI0519 22:19:25.278619 3802 log.go:172] (0xc000912000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0519 22:19:25.313881 3802 log.go:172] (0xc00020ed10) Data frame received for 5\nI0519 22:19:25.313922 3802 log.go:172] (0xc000912000) (5) Data frame handling\nI0519 22:19:25.313957 3802 log.go:172] (0xc000912000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0519 22:19:25.314363 3802 log.go:172] (0xc00020ed10) Data frame received for 5\nI0519 22:19:25.314398 3802 log.go:172] (0xc000912000) (5) Data frame handling\nI0519 22:19:25.314450 3802 log.go:172] (0xc00020ed10) Data frame received for 3\nI0519 22:19:25.314492 3802 log.go:172] (0xc000650820) (3) Data frame handling\nI0519 22:19:25.316246 3802 log.go:172] (0xc00020ed10) Data frame received for 1\nI0519 22:19:25.316283 3802 log.go:172] (0xc0006a7f40) (1) Data frame handling\nI0519 22:19:25.316318 3802 log.go:172] (0xc0006a7f40) (1) Data frame sent\nI0519 22:19:25.316356 3802 log.go:172] (0xc00020ed10) (0xc0006a7f40) Stream removed, broadcasting: 1\nI0519 22:19:25.316399 3802 log.go:172] (0xc00020ed10) Go away received\nI0519 22:19:25.316798 3802 log.go:172] (0xc00020ed10) (0xc0006a7f40) Stream removed, broadcasting: 1\nI0519 22:19:25.316833 3802 log.go:172] (0xc00020ed10) (0xc000650820) Stream removed, broadcasting: 3\nI0519 22:19:25.316845 3802 log.go:172] (0xc00020ed10) (0xc000912000) Stream removed, broadcasting: 5\n" May 19 22:19:25.322: INFO: stdout: "" May 19 22:19:25.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7563 execpod2s4jx -- /bin/sh -x -c nc -zv -t -w 2 10.104.140.118 80' May 19 22:19:25.525: INFO: stderr: "I0519 22:19:25.457592 3824 log.go:172] (0xc000b17290) (0xc000b0a320) Create stream\nI0519 22:19:25.457677 3824 log.go:172] (0xc000b17290) (0xc000b0a320) Stream added, broadcasting: 1\nI0519 22:19:25.462463 3824 log.go:172] (0xc000b17290) Reply frame received for 1\nI0519 22:19:25.462516 3824 log.go:172] (0xc000b17290) (0xc000b0a3c0) Create stream\nI0519 22:19:25.462530 3824 log.go:172] (0xc000b17290) (0xc000b0a3c0) Stream added, broadcasting: 3\nI0519 22:19:25.463688 3824 log.go:172] (0xc000b17290) Reply frame received for 3\nI0519 22:19:25.463727 3824 log.go:172] (0xc000b17290) (0xc000ca6140) Create stream\nI0519 22:19:25.463745 3824 log.go:172] (0xc000b17290) (0xc000ca6140) Stream added, broadcasting: 5\nI0519 22:19:25.464605 3824 log.go:172] (0xc000b17290) Reply frame received for 5\nI0519 22:19:25.518383 3824 log.go:172] (0xc000b17290) Data frame received for 3\nI0519 22:19:25.518433 3824 log.go:172] (0xc000b0a3c0) (3) Data frame handling\nI0519 22:19:25.518462 3824 log.go:172] (0xc000b17290) Data frame received for 5\nI0519 22:19:25.518473 3824 log.go:172] (0xc000ca6140) (5) Data frame handling\nI0519 22:19:25.518489 3824 log.go:172] (0xc000ca6140) (5) Data frame sent\nI0519 22:19:25.518517 3824 log.go:172] (0xc000b17290) Data frame received for 5\nI0519 22:19:25.518529 3824 log.go:172] (0xc000ca6140) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.140.118 80\nConnection to 10.104.140.118 80 port [tcp/http] succeeded!\nI0519 22:19:25.520157 3824 log.go:172] (0xc000b17290) Data frame received for 1\nI0519 22:19:25.520251 3824 log.go:172] (0xc000b0a320) (1) Data frame 
handling\nI0519 22:19:25.520317 3824 log.go:172] (0xc000b0a320) (1) Data frame sent\nI0519 22:19:25.520350 3824 log.go:172] (0xc000b17290) (0xc000b0a320) Stream removed, broadcasting: 1\nI0519 22:19:25.520377 3824 log.go:172] (0xc000b17290) Go away received\nI0519 22:19:25.520750 3824 log.go:172] (0xc000b17290) (0xc000b0a320) Stream removed, broadcasting: 1\nI0519 22:19:25.520775 3824 log.go:172] (0xc000b17290) (0xc000b0a3c0) Stream removed, broadcasting: 3\nI0519 22:19:25.520784 3824 log.go:172] (0xc000b17290) (0xc000ca6140) Stream removed, broadcasting: 5\n" May 19 22:19:25.525: INFO: stdout: "" May 19 22:19:25.525: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:19:25.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7563" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.712 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":262,"skipped":4272,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:19:25.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 19 22:19:25.602: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:19:33.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8821" for this suite. 
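------------------------------
A minimal pod that exercises the same path (image tags and names are placeholders): with restartPolicy Always, every init container must run to completion, in order, before the app container starts.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:              # run sequentially, each to completion
  - name: init1
    image: busybox:1.29
    command: ['sh', '-c', 'true']
  - name: init2
    image: busybox:1.29
    command: ['sh', '-c', 'true']
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod init-demo      # STATUS walks Init:0/2 -> Init:1/2 -> PodInitializing -> Running
------------------------------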
• [SLOW TEST:8.088 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":263,"skipped":4274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:19:33.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 19 22:19:33.850: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59b4242c-a4f8-4afb-86e7-d0036616e1f7" in namespace "projected-6236" to be "success or failure" May 19 22:19:33.854: INFO: Pod "downwardapi-volume-59b4242c-a4f8-4afb-86e7-d0036616e1f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.573802ms May 19 22:19:35.858: INFO: Pod "downwardapi-volume-59b4242c-a4f8-4afb-86e7-d0036616e1f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007359928s May 19 22:19:37.865: INFO: Pod "downwardapi-volume-59b4242c-a4f8-4afb-86e7-d0036616e1f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014619252s STEP: Saw pod success May 19 22:19:37.865: INFO: Pod "downwardapi-volume-59b4242c-a4f8-4afb-86e7-d0036616e1f7" satisfied condition "success or failure" May 19 22:19:37.869: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-59b4242c-a4f8-4afb-86e7-d0036616e1f7 container client-container: STEP: delete the pod May 19 22:19:37.942: INFO: Waiting for pod downwardapi-volume-59b4242c-a4f8-4afb-86e7-d0036616e1f7 to disappear May 19 22:19:37.972: INFO: Pod downwardapi-volume-59b4242c-a4f8-4afb-86e7-d0036616e1f7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:19:37.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6236" for this suite. 
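------------------------------
The volume under test renders container resource fields as files. A sketch of the relevant spec (names and image are placeholders; with the default divisor of 1, the memory request is written out in bytes, so 32Mi reads back as 33554432):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ['sh', '-c', 'cat /etc/podinfo/mem_request']
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
------------------------------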
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4301,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:19:37.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-db563aad-93dd-48b1-903a-3aa5c43be95d STEP: Creating a pod to test consume secrets May 19 22:19:38.065: INFO: Waiting up to 5m0s for pod "pod-secrets-361fd2a3-6d87-4b05-a97b-051547d78b66" in namespace "secrets-5613" to be "success or failure" May 19 22:19:38.070: INFO: Pod "pod-secrets-361fd2a3-6d87-4b05-a97b-051547d78b66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157747ms May 19 22:19:40.073: INFO: Pod "pod-secrets-361fd2a3-6d87-4b05-a97b-051547d78b66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007872338s May 19 22:19:42.077: INFO: Pod "pod-secrets-361fd2a3-6d87-4b05-a97b-051547d78b66": Phase="Running", Reason="", readiness=true. Elapsed: 4.011315684s May 19 22:19:44.081: INFO: Pod "pod-secrets-361fd2a3-6d87-4b05-a97b-051547d78b66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01565337s STEP: Saw pod success May 19 22:19:44.081: INFO: Pod "pod-secrets-361fd2a3-6d87-4b05-a97b-051547d78b66" satisfied condition "success or failure" May 19 22:19:44.084: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-361fd2a3-6d87-4b05-a97b-051547d78b66 container secret-volume-test: STEP: delete the pod May 19 22:19:44.129: INFO: Waiting for pod pod-secrets-361fd2a3-6d87-4b05-a97b-051547d78b66 to disappear May 19 22:19:44.132: INFO: Pod pod-secrets-361fd2a3-6d87-4b05-a97b-051547d78b66 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:19:44.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5613" for this suite. 
• [SLOW TEST:6.162 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4303,"failed":0} [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:19:44.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-96d7646d-2ba1-439c-8385-a59c86486acd in namespace container-probe-7449 May 19 22:19:48.289: INFO: Started pod liveness-96d7646d-2ba1-439c-8385-a59c86486acd in namespace container-probe-7449 STEP: checking the pod's current state and verifying that restartCount is present May 19 22:19:48.292: INFO: Initial restart count of pod liveness-96d7646d-2ba1-439c-8385-a59c86486acd is 0 May 19 22:20:10.345: INFO: Restart count of pod container-probe-7449/liveness-96d7646d-2ba1-439c-8385-a59c86486acd is now 1 (22.053686535s elapsed) May 19 22:20:30.390: INFO: Restart count of pod container-probe-7449/liveness-96d7646d-2ba1-439c-8385-a59c86486acd is now 2 (42.098044165s elapsed) May 19 22:20:50.442: INFO: Restart count of pod container-probe-7449/liveness-96d7646d-2ba1-439c-8385-a59c86486acd is now 3 (1m2.150901564s elapsed) May 19 22:21:10.499: INFO: Restart count of pod container-probe-7449/liveness-96d7646d-2ba1-439c-8385-a59c86486acd is now 4 (1m22.207122959s elapsed) May 19 22:22:18.667: INFO: Restart count of pod container-probe-7449/liveness-96d7646d-2ba1-439c-8385-a59c86486acd is now 5 (2m30.375361309s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:22:18.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7449" for this suite. 
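------------------------------
Note the spacing of the restarts above: roughly every 20 s at first (probe period plus container restart), then a ~68 s gap before restart 5 as crash-loop back-off engages; the assertion is only that the counter never decreases. Reading it by hand:

kubectl -n container-probe-7449 get pod liveness-96d7646d-2ba1-439c-8385-a59c86486acd \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
------------------------------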
• [SLOW TEST:154.569 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4303,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:22:18.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 19 22:22:18.769: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix114104520/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:22:18.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5967" for this suite. 
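------------------------------
(The doubled "kubectl kubectl" above appears to be the framework printing the binary path followed by the command's full argv, whose first element is again the program name.) Reproducing the check by hand, with a placeholder socket path:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
# expect an APIVersions object: {"kind":"APIVersions",...}
kill %1
------------------------------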
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":267,"skipped":4304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:22:18.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 19 22:22:19.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 19 22:22:19.773: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T22:22:19Z generation:1 name:name1 resourceVersion:17549261 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c2fdc810-1e97-4ea5-a2bb-25a1be62a3cd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 19 22:22:29.779: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T22:22:29Z generation:1 name:name2 resourceVersion:17549306 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:959df800-e793-4bb1-b153-f68cae4428d8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 19 22:22:39.786: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T22:22:19Z generation:2 name:name1 resourceVersion:17549335 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c2fdc810-1e97-4ea5-a2bb-25a1be62a3cd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 19 22:22:49.792: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T22:22:29Z generation:2 name:name2 resourceVersion:17549365 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:959df800-e793-4bb1-b153-f68cae4428d8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 19 22:22:59.800: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T22:22:19Z generation:2 name:name1 resourceVersion:17549395 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c2fdc810-1e97-4ea5-a2bb-25a1be62a3cd] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 19 22:23:09.808: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T22:22:29Z generation:2 name:name2 resourceVersion:17549425 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:959df800-e793-4bb1-b153-f68cae4428d8] 
num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:23:20.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2625" for this suite. • [SLOW TEST:61.343 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":268,"skipped":4327,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:23:20.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-2b0b6f55-a8fa-49ce-9f24-4ae4e4a2bbf5 STEP: Creating a pod to test consume secrets May 19 22:23:20.476: INFO: Waiting up to 5m0s for pod "pod-secrets-358a1de0-2cee-4307-998e-ccbee2d4eebf" in namespace "secrets-1657" to be "success or failure" May 19 22:23:20.486: INFO: Pod "pod-secrets-358a1de0-2cee-4307-998e-ccbee2d4eebf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.560291ms May 19 22:23:22.546: INFO: Pod "pod-secrets-358a1de0-2cee-4307-998e-ccbee2d4eebf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069521082s May 19 22:23:24.549: INFO: Pod "pod-secrets-358a1de0-2cee-4307-998e-ccbee2d4eebf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07305939s STEP: Saw pod success May 19 22:23:24.549: INFO: Pod "pod-secrets-358a1de0-2cee-4307-998e-ccbee2d4eebf" satisfied condition "success or failure" May 19 22:23:24.552: INFO: Trying to get logs from node jerma-worker pod pod-secrets-358a1de0-2cee-4307-998e-ccbee2d4eebf container secret-env-test: STEP: delete the pod May 19 22:23:24.577: INFO: Waiting for pod pod-secrets-358a1de0-2cee-4307-998e-ccbee2d4eebf to disappear May 19 22:23:24.582: INFO: Pod pod-secrets-358a1de0-2cee-4307-998e-ccbee2d4eebf no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:23:24.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1657" for this suite. 
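------------------------------
Consuming a Secret key as an environment variable, by hand (all names and the image are placeholders):

kubectl create secret generic secret-env --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ['sh', '-c', 'echo "SECRET_DATA=$SECRET_DATA"']
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env
          key: data-1
EOF
kubectl logs secret-env-demo     # SECRET_DATA=value-1
------------------------------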
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4337,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:23:24.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-2940ea3a-42d9-429f-90f5-efa72c840839 STEP: Creating secret with name s-test-opt-upd-902e6b83-fde3-4bbb-a36c-27bc4c72af93 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2940ea3a-42d9-429f-90f5-efa72c840839 STEP: Updating secret s-test-opt-upd-902e6b83-fde3-4bbb-a36c-27bc4c72af93 STEP: Creating secret with name s-test-opt-create-4f29d79d-308b-499c-b056-8f571d3384ed STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:23:32.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7055" for this suite. • [SLOW TEST:8.259 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:23:32.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-b4b9cad8-56fa-4139-924f-847c6db0a79f STEP: Creating secret with name s-test-opt-upd-53b9c31c-ca5a-4f15-85c2-7c49578cf939 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b4b9cad8-56fa-4139-924f-847c6db0a79f STEP: Updating secret s-test-opt-upd-53b9c31c-ca5a-4f15-85c2-7c49578cf939 STEP: Creating secret with name s-test-opt-create-a8562647-9fd5-419d-89d5-be06c04c658f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected 
secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:23:41.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5939" for this suite. • [SLOW TEST:8.390 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4373,"failed":0} [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:23:41.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:23:41.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6525" for this suite. 
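------------------------------
The 406 in this test comes from content negotiation: clients may ask the apiserver for a server-side Table rendering of any resource, and a backend that cannot produce the metadata refuses. The happy path is visible with a raw request (v1beta1 matches this v1.17 cluster; port and namespace are arbitrary):

kubectl proxy --port=8001 &
curl -s -H 'Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io' \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods
# => {"kind":"Table","apiVersion":"meta.k8s.io/v1beta1",...}
kill %1
------------------------------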
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":272,"skipped":4373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:23:41.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5605 STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 22:23:41.374: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 19 22:24:05.609: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.123 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5605 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 22:24:05.609: INFO: >>> kubeConfig: /root/.kube/config I0519 22:24:05.655404 6 log.go:172] (0xc001760dc0) (0xc0002692c0) Create stream I0519 22:24:05.655438 6 log.go:172] (0xc001760dc0) (0xc0002692c0) Stream added, broadcasting: 1 I0519 22:24:05.657485 6 log.go:172] (0xc001760dc0) Reply frame received for 1 I0519 22:24:05.657511 6 log.go:172] (0xc001760dc0) (0xc000567d60) Create stream I0519 22:24:05.657524 6 log.go:172] (0xc001760dc0) (0xc000567d60) Stream added, broadcasting: 3 I0519 22:24:05.658151 6 log.go:172] (0xc001760dc0) Reply frame received for 3 I0519 22:24:05.658186 6 log.go:172] (0xc001760dc0) (0xc0002699a0) Create stream I0519 22:24:05.658199 6 log.go:172] (0xc001760dc0) (0xc0002699a0) Stream added, broadcasting: 5 I0519 22:24:05.658971 6 log.go:172] (0xc001760dc0) Reply frame received for 5 I0519 22:24:06.725825 6 log.go:172] (0xc001760dc0) Data frame received for 3 I0519 22:24:06.725913 6 log.go:172] (0xc000567d60) (3) Data frame handling I0519 22:24:06.725963 6 log.go:172] (0xc000567d60) (3) Data frame sent I0519 22:24:06.726072 6 log.go:172] (0xc001760dc0) Data frame received for 3 I0519 22:24:06.726113 6 log.go:172] (0xc000567d60) (3) Data frame handling I0519 22:24:06.726255 6 log.go:172] (0xc001760dc0) Data frame received for 5 I0519 22:24:06.726276 6 log.go:172] (0xc0002699a0) (5) Data frame handling I0519 22:24:06.728846 6 log.go:172] (0xc001760dc0) Data frame received for 1 I0519 22:24:06.728879 6 log.go:172] (0xc0002692c0) (1) Data frame handling I0519 22:24:06.728903 6 log.go:172] (0xc0002692c0) (1) Data frame sent I0519 22:24:06.728923 6 log.go:172] (0xc001760dc0) (0xc0002692c0) Stream removed, broadcasting: 1 I0519 22:24:06.728945 6 log.go:172] (0xc001760dc0) Go away received I0519 22:24:06.729080 6 log.go:172] (0xc001760dc0) (0xc0002692c0) Stream removed, broadcasting: 1 I0519 22:24:06.729279 6 log.go:172] (0xc001760dc0) (0xc000567d60) Stream 
removed, broadcasting: 3 I0519 22:24:06.729302 6 log.go:172] (0xc001760dc0) (0xc0002699a0) Stream removed, broadcasting: 5 May 19 22:24:06.729: INFO: Found all expected endpoints: [netserver-0] May 19 22:24:06.732: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.153 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5605 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 22:24:06.733: INFO: >>> kubeConfig: /root/.kube/config I0519 22:24:06.766104 6 log.go:172] (0xc001949600) (0xc000d59f40) Create stream I0519 22:24:06.766134 6 log.go:172] (0xc001949600) (0xc000d59f40) Stream added, broadcasting: 1 I0519 22:24:06.768261 6 log.go:172] (0xc001949600) Reply frame received for 1 I0519 22:24:06.768298 6 log.go:172] (0xc001949600) (0xc0027aa0a0) Create stream I0519 22:24:06.768310 6 log.go:172] (0xc001949600) (0xc0027aa0a0) Stream added, broadcasting: 3 I0519 22:24:06.769420 6 log.go:172] (0xc001949600) Reply frame received for 3 I0519 22:24:06.769457 6 log.go:172] (0xc001949600) (0xc001a2e0a0) Create stream I0519 22:24:06.769471 6 log.go:172] (0xc001949600) (0xc001a2e0a0) Stream added, broadcasting: 5 I0519 22:24:06.770494 6 log.go:172] (0xc001949600) Reply frame received for 5 I0519 22:24:07.858205 6 log.go:172] (0xc001949600) Data frame received for 5 I0519 22:24:07.858234 6 log.go:172] (0xc001a2e0a0) (5) Data frame handling I0519 22:24:07.858304 6 log.go:172] (0xc001949600) Data frame received for 3 I0519 22:24:07.858338 6 log.go:172] (0xc0027aa0a0) (3) Data frame handling I0519 22:24:07.858361 6 log.go:172] (0xc0027aa0a0) (3) Data frame sent I0519 22:24:07.858380 6 log.go:172] (0xc001949600) Data frame received for 3 I0519 22:24:07.858397 6 log.go:172] (0xc0027aa0a0) (3) Data frame handling I0519 22:24:07.859863 6 log.go:172] (0xc001949600) Data frame received for 1 I0519 22:24:07.859892 6 log.go:172] (0xc000d59f40) (1) Data frame handling I0519 22:24:07.859911 6 log.go:172] (0xc000d59f40) (1) Data frame sent I0519 22:24:07.859946 6 log.go:172] (0xc001949600) (0xc000d59f40) Stream removed, broadcasting: 1 I0519 22:24:07.860076 6 log.go:172] (0xc001949600) (0xc000d59f40) Stream removed, broadcasting: 1 I0519 22:24:07.860094 6 log.go:172] (0xc001949600) (0xc0027aa0a0) Stream removed, broadcasting: 3 I0519 22:24:07.860148 6 log.go:172] (0xc001949600) Go away received I0519 22:24:07.860260 6 log.go:172] (0xc001949600) (0xc001a2e0a0) Stream removed, broadcasting: 5 May 19 22:24:07.860: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:24:07.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5605" for this suite. 
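------------------------------
Stripped of the stream-multiplexing noise, each check above is a single command run in the host-network test pod: send the string "hostName" over UDP to a netserver pod IP and expect that pod's hostname back (the IP below is from this run):

echo hostName | nc -w 1 -u 10.244.1.123 8081 | grep -v '^\s*$'
# prints: netserver-0
------------------------------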
• [SLOW TEST:26.526 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4408,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:24:07.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9035 STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 22:24:07.937: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 19 22:24:30.119: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.156:8080/dial?request=hostname&protocol=udp&host=10.244.1.124&port=8081&tries=1'] Namespace:pod-network-test-9035 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 22:24:30.119: INFO: >>> kubeConfig: /root/.kube/config I0519 22:24:30.154113 6 log.go:172] (0xc002660000) (0xc0023b4be0) Create stream I0519 22:24:30.154141 6 log.go:172] (0xc002660000) (0xc0023b4be0) Stream added, broadcasting: 1 I0519 22:24:30.155952 6 log.go:172] (0xc002660000) Reply frame received for 1 I0519 22:24:30.156032 6 log.go:172] (0xc002660000) (0xc002942640) Create stream I0519 22:24:30.156059 6 log.go:172] (0xc002660000) (0xc002942640) Stream added, broadcasting: 3 I0519 22:24:30.157007 6 log.go:172] (0xc002660000) Reply frame received for 3 I0519 22:24:30.157028 6 log.go:172] (0xc002660000) (0xc0023b4d20) Create stream I0519 22:24:30.157038 6 log.go:172] (0xc002660000) (0xc0023b4d20) Stream added, broadcasting: 5 I0519 22:24:30.158019 6 log.go:172] (0xc002660000) Reply frame received for 5 I0519 22:24:30.228509 6 log.go:172] (0xc002660000) Data frame received for 3 I0519 22:24:30.228549 6 log.go:172] (0xc002942640) (3) Data frame handling I0519 22:24:30.228574 6 log.go:172] (0xc002942640) (3) Data frame sent I0519 22:24:30.229505 6 log.go:172] (0xc002660000) Data frame received for 3 I0519 22:24:30.229531 6 log.go:172] (0xc002942640) (3) Data frame handling I0519 22:24:30.229587 6 log.go:172] (0xc002660000) Data frame received for 5 I0519 22:24:30.229615 6 log.go:172] (0xc0023b4d20) (5) Data frame handling I0519 22:24:30.231341 6 log.go:172] (0xc002660000) Data frame 
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 22:24:30.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 19 22:24:30.955: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 19 22:24:33.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523870, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523870, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523871, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725523870, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 19 22:24:36.061: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 19 22:24:36.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 22:24:37.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2762" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:7.527 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":275,"skipped":4431,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
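The spec above stores CRs at two versions and lists them at each version; because the list is "non homogeneous" (a mix of v1 and v2 objects), the API server must call the deployed webhook to convert some items on the fly. The sketch below shows the general shape of such a webhook: it receives a ConversionReview, rewrites each object's apiVersion to the requested one, and returns the converted list. It uses plain JSON instead of the apiextensions client types, serves plain HTTP rather than the TLS setup the test performs, and the /crdconvert path and port are made up; a real webhook would also translate fields between the v1 and v2 schemas.

// convert.go - a minimal sketch of a CRD conversion webhook, assuming the
// documented ConversionReview request/response contract.
package main

import (
	"encoding/json"
	"net/http"
)

type convRequest struct {
	UID               string            `json:"uid"`
	DesiredAPIVersion string            `json:"desiredAPIVersion"`
	Objects           []json.RawMessage `json:"objects"`
}

type convResponse struct {
	UID              string            `json:"uid"`
	ConvertedObjects []json.RawMessage `json:"convertedObjects"`
	Result           map[string]string `json:"result"`
}

type conversionReview struct {
	APIVersion string        `json:"apiVersion"`
	Kind       string        `json:"kind"`
	Request    *convRequest  `json:"request,omitempty"`
	Response   *convResponse `json:"response,omitempty"`
}

func convert(w http.ResponseWriter, r *http.Request) {
	var review conversionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed ConversionReview", http.StatusBadRequest)
		return
	}
	converted := make([]json.RawMessage, 0, len(review.Request.Objects))
	for _, raw := range review.Request.Objects {
		var obj map[string]interface{}
		if err := json.Unmarshal(raw, &obj); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// Stamp the version the API server asked for; a real webhook would
		// also move fields between the two schemas here.
		obj["apiVersion"] = review.Request.DesiredAPIVersion
		out, _ := json.Marshal(obj)
		converted = append(converted, out)
	}
	review.Response = &convResponse{
		UID:              review.Request.UID,
		ConvertedObjects: converted,
		Result:           map[string]string{"status": "Success"},
	}
	review.Request = nil
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/crdconvert", convert)
	http.ListenAndServe(":9443", nil)
}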
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 22:24:37.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 19 22:24:37.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 19 22:24:40.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1729 create -f -'
May 19 22:24:44.140: INFO: stderr: ""
May 19 22:24:44.140: INFO: stdout: "e2e-test-crd-publish-openapi-5569-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 19 22:24:44.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1729 delete e2e-test-crd-publish-openapi-5569-crds test-cr'
May 19 22:24:44.259: INFO: stderr: ""
May 19 22:24:44.259: INFO: stdout: "e2e-test-crd-publish-openapi-5569-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 19 22:24:44.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1729 apply -f -'
May 19 22:24:44.524: INFO: stderr: ""
May 19 22:24:44.524: INFO: stdout: "e2e-test-crd-publish-openapi-5569-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 19 22:24:44.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1729 delete e2e-test-crd-publish-openapi-5569-crds test-cr'
May 19 22:24:44.643: INFO: stderr: ""
May 19 22:24:44.643: INFO: stdout: "e2e-test-crd-publish-openapi-5569-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 19 22:24:44.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5569-crds'
May 19 22:24:44.875: INFO: stderr: ""
May 19 22:24:44.875: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5569-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 22:24:47.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1729" for this suite.

• [SLOW TEST:9.892 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":276,"skipped":4452,"failed":0}
SSSSSSSS
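"Preserving unknown fields at the schema root" means the CRD's published OpenAPI schema is effectively schemaless, so neither kubectl's client-side validation nor server-side pruning rejects arbitrary properties, and kubectl explain has no per-field documentation to print (hence the empty DESCRIPTION above). A sketch of such a CRD, built with the apiextensions v1 Go types, follows; the group, kind, and plural are taken from the log, while the v1 API (the test cluster may have used v1beta1) and the omitted name fields are assumptions.

// preserve_unknown.go - a sketch of a CRD whose schema root sets
// x-kubernetes-preserve-unknown-fields, matching the behavior this test
// exercises. Printed as JSON for inspection; not applied to any cluster.
package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	preserve := true
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{
			Name: "e2e-test-crd-publish-openapi-5569-crds.crd-publish-openapi-test-unknown-at-root.example.com",
		},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test-unknown-at-root.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "e2e-test-crd-publish-openapi-5569-crds",
				Kind:   "E2e-test-crd-publish-openapi-5569-crd",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					// The whole object is schemaless: unknown fields at the
					// root survive pruning and client-side validation.
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}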
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 19 22:24:47.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 19 22:24:54.351: INFO: Successfully updated pod "adopt-release-vktkx"
STEP: Checking that the Job readopts the Pod
May 19 22:24:54.351: INFO: Waiting up to 15m0s for pod "adopt-release-vktkx" in namespace "job-5907" to be "adopted"
May 19 22:24:54.379: INFO: Pod "adopt-release-vktkx": Phase="Running", Reason="", readiness=true. Elapsed: 27.194412ms
May 19 22:24:56.383: INFO: Pod "adopt-release-vktkx": Phase="Running", Reason="", readiness=true. Elapsed: 2.031700861s
May 19 22:24:56.383: INFO: Pod "adopt-release-vktkx" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
May 19 22:24:56.892: INFO: Successfully updated pod "adopt-release-vktkx"
STEP: Checking that the Job releases the Pod
May 19 22:24:56.892: INFO: Waiting up to 15m0s for pod "adopt-release-vktkx" in namespace "job-5907" to be "released"
May 19 22:24:56.902: INFO: Pod "adopt-release-vktkx": Phase="Running", Reason="", readiness=true. Elapsed: 9.530682ms
May 19 22:24:58.906: INFO: Pod "adopt-release-vktkx": Phase="Running", Reason="", readiness=true. Elapsed: 2.013754284s
May 19 22:24:58.906: INFO: Pod "adopt-release-vktkx" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 19 22:24:58.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5907" for this suite.

• [SLOW TEST:11.140 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":277,"skipped":4460,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
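The two pod updates logged above drive the Job controller's ownership logic: clearing the pod's ownerReferences orphans it, and since it still matches the Job's selector the controller re-adopts it; stripping the selector labels then makes the controller drop its controllerRef, releasing the pod. A sketch of equivalent mutations with client-go follows, using the pre-1.18 (context-free) method signatures to match the v1.17 suite in this log; the label key in the second patch is hypothetical, since the real test edits the job's own selector labels.

// adopt_release.go - a sketch of the orphan/release mutations applied to
// pod "adopt-release-vktkx". Signatures assume client-go v0.17.x.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := client.CoreV1().Pods("job-5907")

	// Orphan: a merge patch clearing ownerReferences removes the Job's
	// controllerRef; the controller re-adopts the still-matching pod.
	if _, err := pods.Patch("adopt-release-vktkx", types.MergePatchType,
		[]byte(`{"metadata":{"ownerReferences":null}}`)); err != nil {
		panic(err)
	}
	fmt.Println("pod orphaned; Job controller should re-adopt it")

	// Release: removing a selector label (key "job" is assumed here) makes
	// the pod non-matching, so the controller drops its controllerRef.
	if _, err := pods.Patch("adopt-release-vktkx", types.MergePatchType,
		[]byte(`{"metadata":{"labels":{"job":null}}}`)); err != nil {
		panic(err)
	}
	fmt.Println("labels removed; Job controller should release the pod")
}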
"e2e-test-crd-publish-openapi-5569-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 19 22:24:44.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5569-crds' May 19 22:24:44.875: INFO: stderr: "" May 19 22:24:44.875: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5569-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 19 22:24:47.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1729" for this suite. • [SLOW TEST:9.892 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":276,"skipped":4452,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 19 22:24:47.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 19 22:24:54.351: INFO: Successfully updated pod "adopt-release-vktkx" STEP: Checking that the Job readopts the Pod May 19 22:24:54.351: INFO: Waiting up to 15m0s for pod "adopt-release-vktkx" in namespace "job-5907" to be "adopted" May 19 22:24:54.379: INFO: Pod "adopt-release-vktkx": Phase="Running", Reason="", readiness=true. Elapsed: 27.194412ms May 19 22:24:56.383: INFO: Pod "adopt-release-vktkx": Phase="Running", Reason="", readiness=true. Elapsed: 2.031700861s May 19 22:24:56.383: INFO: Pod "adopt-release-vktkx" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 19 22:24:56.892: INFO: Successfully updated pod "adopt-release-vktkx" STEP: Checking that the Job releases the Pod May 19 22:24:56.892: INFO: Waiting up to 15m0s for pod "adopt-release-vktkx" in namespace "job-5907" to be "released" May 19 22:24:56.902: INFO: Pod "adopt-release-vktkx": Phase="Running", Reason="", readiness=true. Elapsed: 9.530682ms May 19 22:24:58.906: INFO: Pod "adopt-release-vktkx": Phase="Running", Reason="", readiness=true. 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 19 22:25:10.667: INFO: Running AfterSuite actions on all nodes
May 19 22:25:10.667: INFO: Running AfterSuite actions on node 1
May 19 22:25:10.667: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 4370.756 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS