I0518 23:38:16.985501 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0518 23:38:16.985777 7 e2e.go:129] Starting e2e run "f8ad90c3-e60f-41ce-b39c-8d0cd27f60aa" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589845095 - Will randomize all specs
Will run 288 of 5095 specs

May 18 23:38:17.055: INFO: >>> kubeConfig: /root/.kube/config
May 18 23:38:17.057: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 18 23:38:17.081: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 18 23:38:17.122: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 18 23:38:17.122: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 18 23:38:17.122: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 18 23:38:17.131: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 18 23:38:17.131: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 18 23:38:17.131: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 18 23:38:17.132: INFO: kube-apiserver version: v1.18.2
May 18 23:38:17.132: INFO: >>> kubeConfig: /root/.kube/config
May 18 23:38:17.138: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 18 23:38:17.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
May 18 23:38:17.250: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 18 23:38:17.253: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 18 23:38:17.261: INFO: Waiting for terminating namespaces to be deleted...
May 18 23:38:17.263: INFO: Logging pods the apiserver thinks are on node latest-worker before test
May 18 23:38:17.268: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded)
May 18 23:38:17.268: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 18 23:38:17.268: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded)
May 18 23:38:17.268: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 18 23:38:17.268: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 18 23:38:17.268: INFO: Container kindnet-cni ready: true, restart count 0
May 18 23:38:17.268: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 18 23:38:17.268: INFO: Container kube-proxy ready: true, restart count 0
May 18 23:38:17.272: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test
May 18 23:38:17.272: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded)
May 18 23:38:17.272: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 18 23:38:17.272: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded)
May 18 23:38:17.272: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 18 23:38:17.272: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 18 23:38:17.272: INFO: Container kindnet-cni ready: true, restart count 0
May 18 23:38:17.272: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 18 23:38:17.272: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node latest-worker
STEP: verifying the node has the label node latest-worker2
May 18 23:38:17.380: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker
May 18 23:38:17.380: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2
May 18 23:38:17.380: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker
May 18 23:38:17.380: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2
May 18 23:38:17.380: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker
May 18 23:38:17.380: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2
STEP: Starting Pods to consume most of the cluster CPU.
May 18 23:38:17.380: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
May 18 23:38:17.386: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-24619edb-6794-471d-932f-e21a2cd6b975.161043b15abe287f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7991/filler-pod-24619edb-6794-471d-932f-e21a2cd6b975 to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-24619edb-6794-471d-932f-e21a2cd6b975.161043b1beb9f3ad], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-24619edb-6794-471d-932f-e21a2cd6b975.161043b22a9685e9], Reason = [Created], Message = [Created container filler-pod-24619edb-6794-471d-932f-e21a2cd6b975]
STEP: Considering event: Type = [Normal], Name = [filler-pod-24619edb-6794-471d-932f-e21a2cd6b975.161043b23c7b764a], Reason = [Started], Message = [Started container filler-pod-24619edb-6794-471d-932f-e21a2cd6b975]
STEP: Considering event: Type = [Normal], Name = [filler-pod-446cbc9c-1d47-407e-83e7-9692a893c5c7.161043b156d2c28b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7991/filler-pod-446cbc9c-1d47-407e-83e7-9692a893c5c7 to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-446cbc9c-1d47-407e-83e7-9692a893c5c7.161043b1ab618c0b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-446cbc9c-1d47-407e-83e7-9692a893c5c7.161043b21f81a1bb], Reason = [Created], Message = [Created container filler-pod-446cbc9c-1d47-407e-83e7-9692a893c5c7]
STEP: Considering event: Type = [Normal], Name = [filler-pod-446cbc9c-1d47-407e-83e7-9692a893c5c7.161043b233e1ce59], Reason = [Started], Message = [Started container filler-pod-446cbc9c-1d47-407e-83e7-9692a893c5c7]
STEP: Considering event: Type = [Warning], Name = [additional-pod.161043b2c1b68d5b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.161043b2c501e50d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:38:24.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7991" for this suite.
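For context: the scheduler-predicates spec above computes how much CPU is already requested on each worker, creates "filler" pause pods sized to consume the remaining allocatable CPU, and then asserts that one more pod fails scheduling with "Insufficient cpu". A minimal sketch of such a filler pod in Go, assuming the v1.18-era k8s.io/api packages; the function name is illustrative and the use of NodeName is a simplification (the real test pins pods via a node-label selector):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod builds a pause pod that requests the given amount of CPU on a
// specific node, mirroring the filler-pod-* pods in the log above.
func fillerPod(name, node string, milliCPU int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeName: node, // pin to the node being filled (simplified)
			Containers: []corev1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// e.g. 11130m, as in "Creating a pod which consumes cpu=11130m"
						corev1.ResourceCPU: *resource.NewMilliQuantity(milliCPU, resource.DecimalSI),
					},
				},
			}},
		},
	}
}

func main() {
	p := fillerPod("filler-pod-example", "latest-worker", 11130)
	fmt.Println(p.Spec.Containers[0].Resources.Requests.Cpu())
}
```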
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:7.469 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":1,"skipped":25,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 18 23:38:24.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 18 23:38:24.714: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46d639a4-b1de-45ac-abc6-4ba2fbb106cc" in namespace "projected-2696" to be "Succeeded or Failed"
May 18 23:38:24.735: INFO: Pod "downwardapi-volume-46d639a4-b1de-45ac-abc6-4ba2fbb106cc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.758927ms
May 18 23:38:26.739: INFO: Pod "downwardapi-volume-46d639a4-b1de-45ac-abc6-4ba2fbb106cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025049668s
May 18 23:38:28.744: INFO: Pod "downwardapi-volume-46d639a4-b1de-45ac-abc6-4ba2fbb106cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030268815s
STEP: Saw pod success
May 18 23:38:28.744: INFO: Pod "downwardapi-volume-46d639a4-b1de-45ac-abc6-4ba2fbb106cc" satisfied condition "Succeeded or Failed"
May 18 23:38:28.747: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-46d639a4-b1de-45ac-abc6-4ba2fbb106cc container client-container: 
STEP: delete the pod
May 18 23:38:28.805: INFO: Waiting for pod downwardapi-volume-46d639a4-b1de-45ac-abc6-4ba2fbb106cc to disappear
May 18 23:38:28.813: INFO: Pod downwardapi-volume-46d639a4-b1de-45ac-abc6-4ba2fbb106cc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:38:28.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2696" for this suite.
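For context: the projected downwardAPI spec above mounts a projected volume that exposes metadata.name as a file and reads it back from the container. A minimal sketch of such a pod in Go, assuming the k8s.io/api packages; the image, command, and names are illustrative, not the test's actual code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podnamePod builds a pod whose projected downwardAPI volume exposes
// metadata.name at /etc/podinfo/podname; the container prints it and exits.
func podnamePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(podnamePod().Spec.Volumes[0].Name) }
```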
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:38:28.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 18 23:38:28.897: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:38:37.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4073" for this suite. • [SLOW TEST:8.454 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":3,"skipped":83,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:38:37.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-14bb0a78-5714-42a4-a62c-30a6b1e8edce STEP: Creating a pod to test consume configMaps May 18 23:38:37.401: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b78c9f0-0cbd-4ee7-82d6-3015956d8d43" in namespace "configmap-480" to be "Succeeded or Failed" May 18 23:38:37.405: INFO: Pod "pod-configmaps-4b78c9f0-0cbd-4ee7-82d6-3015956d8d43": Phase="Pending", Reason="", readiness=false. 
May 18 23:38:39.413: INFO: Pod "pod-configmaps-4b78c9f0-0cbd-4ee7-82d6-3015956d8d43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011913905s
May 18 23:38:41.417: INFO: Pod "pod-configmaps-4b78c9f0-0cbd-4ee7-82d6-3015956d8d43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016335385s
STEP: Saw pod success
May 18 23:38:41.417: INFO: Pod "pod-configmaps-4b78c9f0-0cbd-4ee7-82d6-3015956d8d43" satisfied condition "Succeeded or Failed"
May 18 23:38:41.421: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4b78c9f0-0cbd-4ee7-82d6-3015956d8d43 container configmap-volume-test: 
STEP: delete the pod
May 18 23:38:41.711: INFO: Waiting for pod pod-configmaps-4b78c9f0-0cbd-4ee7-82d6-3015956d8d43 to disappear
May 18 23:38:41.723: INFO: Pod pod-configmaps-4b78c9f0-0cbd-4ee7-82d6-3015956d8d43 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:38:41.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-480" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":181,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 18 23:38:41.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-074b4663-d337-4422-8e0d-2e4d1e800ab2
STEP: Creating a pod to test consume configMaps
May 18 23:38:41.804: INFO: Waiting up to 5m0s for pod "pod-configmaps-a4d2710a-9c06-4476-9a74-14a5bd580b68" in namespace "configmap-2612" to be "Succeeded or Failed"
May 18 23:38:41.877: INFO: Pod "pod-configmaps-a4d2710a-9c06-4476-9a74-14a5bd580b68": Phase="Pending", Reason="", readiness=false. Elapsed: 72.391989ms
May 18 23:38:43.881: INFO: Pod "pod-configmaps-a4d2710a-9c06-4476-9a74-14a5bd580b68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076850991s
May 18 23:38:45.886: INFO: Pod "pod-configmaps-a4d2710a-9c06-4476-9a74-14a5bd580b68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081597248s
STEP: Saw pod success
May 18 23:38:45.886: INFO: Pod "pod-configmaps-a4d2710a-9c06-4476-9a74-14a5bd580b68" satisfied condition "Succeeded or Failed"
May 18 23:38:45.889: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-a4d2710a-9c06-4476-9a74-14a5bd580b68 container configmap-volume-test: 
STEP: delete the pod
May 18 23:38:45.946: INFO: Waiting for pod pod-configmaps-a4d2710a-9c06-4476-9a74-14a5bd580b68 to disappear
May 18 23:38:45.964: INFO: Pod pod-configmaps-a4d2710a-9c06-4476-9a74-14a5bd580b68 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:38:45.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2612" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":5,"skipped":187,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 18 23:38:45.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 18 23:40:46.082: INFO: Deleting pod "var-expansion-582d67cd-fd58-4e2d-ac96-b19a869fb998" in namespace "var-expansion-7586"
May 18 23:40:46.086: INFO: Wait up to 5m0s for pod "var-expansion-582d67cd-fd58-4e2d-ac96-b19a869fb998" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:40:48.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7586" for this suite.
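For context: the Variable Expansion spec above relies on the kubelet rejecting a subPathExpr that expands to an absolute path, which keeps the pod from ever starting. A minimal sketch of the offending mount in Go, assuming the k8s.io/api packages; the env var name, image, and values are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// absoluteSubPathContainer shows a volumeMount whose subPathExpr expands to
// an absolute path ("/tmp"); the kubelet refuses to start such a container,
// which is the failure the spec waits for before deleting the pod.
func absoluteSubPathContainer() corev1.Container {
	return corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env:   []corev1.EnvVar{{Name: "POD_NAME", Value: "/tmp"}}, // absolute, hence invalid
		VolumeMounts: []corev1.VolumeMount{{
			Name:        "workdir1",
			MountPath:   "/volume_mount",
			SubPathExpr: "$(POD_NAME)", // substituted from the env var above
		}},
	}
}

func main() { fmt.Println(absoluteSubPathContainer().VolumeMounts[0].SubPathExpr) }
```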
• [SLOW TEST:122.158 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":6,"skipped":201,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 18 23:40:48.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-3093ba7d-2e82-40cd-82c0-605e387ac0eb
STEP: Creating a pod to test consume configMaps
May 18 23:40:48.215: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7efc1d17-e690-420a-b07f-73b03163088b" in namespace "projected-3395" to be "Succeeded or Failed"
May 18 23:40:48.234: INFO: Pod "pod-projected-configmaps-7efc1d17-e690-420a-b07f-73b03163088b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.009331ms
May 18 23:40:50.239: INFO: Pod "pod-projected-configmaps-7efc1d17-e690-420a-b07f-73b03163088b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02379589s
May 18 23:40:52.260: INFO: Pod "pod-projected-configmaps-7efc1d17-e690-420a-b07f-73b03163088b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045618814s
STEP: Saw pod success
May 18 23:40:52.260: INFO: Pod "pod-projected-configmaps-7efc1d17-e690-420a-b07f-73b03163088b" satisfied condition "Succeeded or Failed"
May 18 23:40:52.264: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7efc1d17-e690-420a-b07f-73b03163088b container projected-configmap-volume-test: 
STEP: delete the pod
May 18 23:40:52.303: INFO: Waiting for pod pod-projected-configmaps-7efc1d17-e690-420a-b07f-73b03163088b to disappear
May 18 23:40:52.314: INFO: Pod pod-projected-configmaps-7efc1d17-e690-420a-b07f-73b03163088b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:40:52.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3395" for this suite.
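For context: the "as non-root" variant above differs from the plain configMap-volume specs only in that the consuming container runs under a non-root UID and must still be able to read the mounted file. A minimal sketch in Go, assuming the k8s.io/api packages; the UID, image, command, and names are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nonRootUID is an arbitrary non-root UID for the sketch; the real spec also
// fixes a non-root user, though not necessarily this value.
var nonRootUID = int64(1000)

// nonRootConfigMapContainer mounts a configMap-backed volume and runs as a
// non-root user, so a successful "cat" proves the file is world-readable.
func nonRootConfigMapContainer() corev1.Container {
	return corev1.Container{
		Name:            "projected-configmap-volume-test",
		Image:           "busybox",
		Command:         []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
		SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "projected-configmap-volume",
			MountPath: "/etc/projected-configmap-volume",
		}},
	}
}

func main() { fmt.Println(*nonRootConfigMapContainer().SecurityContext.RunAsUser) }
```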
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":7,"skipped":216,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:40:52.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 18 23:41:00.446: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 18 23:41:00.481: INFO: Pod pod-with-poststart-http-hook still exists May 18 23:41:02.481: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 18 23:41:02.485: INFO: Pod pod-with-poststart-http-hook still exists May 18 23:41:04.481: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 18 23:41:04.486: INFO: Pod pod-with-poststart-http-hook still exists May 18 23:41:06.481: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 18 23:41:06.486: INFO: Pod pod-with-poststart-http-hook still exists May 18 23:41:08.481: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 18 23:41:08.487: INFO: Pod pod-with-poststart-http-hook still exists May 18 23:41:10.481: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 18 23:41:10.486: INFO: Pod pod-with-poststart-http-hook still exists May 18 23:41:12.481: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 18 23:41:12.486: INFO: Pod pod-with-poststart-http-hook still exists May 18 23:41:14.481: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 18 23:41:14.486: INFO: Pod pod-with-poststart-http-hook still exists May 18 23:41:16.481: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 18 23:41:16.486: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:41:16.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4790" for this suite. 
• [SLOW TEST:24.175 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":8,"skipped":223,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 18 23:41:16.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 18 23:41:16.569: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:41:17.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4231" for this suite.
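For context: the CRD spec above exercises get/update/patch against the /status subresource of a CustomResourceDefinition. A minimal sketch of the patch call in Go, assuming the v0.18-era apiextensions clientset; the CRD name and the condition payload are illustrative, not what the test actually sends:

```go
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

// patchCRDStatus merge-patches the status subresource of a CRD; note the
// trailing "status" argument, which routes the patch to /status.
func patchCRDStatus(client apiextensionsclient.Interface, name string) error {
	patch := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True","reason":"E2E","message":"patched via /status"}]}}`)
	_, err := client.ApiextensionsV1().CustomResourceDefinitions().Patch(
		context.TODO(), name, types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextensionsclient.NewForConfigOrDie(config)
	fmt.Println(patchCRDStatus(client, "examples.e2e.example.com")) // hypothetical CRD name
}
```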
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":9,"skipped":242,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:41:17.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 18 23:41:17.300: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48764c40-2a54-4ce3-b044-192c94d157c8" in namespace "downward-api-6113" to be "Succeeded or Failed" May 18 23:41:17.318: INFO: Pod "downwardapi-volume-48764c40-2a54-4ce3-b044-192c94d157c8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.317312ms May 18 23:41:19.399: INFO: Pod "downwardapi-volume-48764c40-2a54-4ce3-b044-192c94d157c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099136254s May 18 23:41:21.403: INFO: Pod "downwardapi-volume-48764c40-2a54-4ce3-b044-192c94d157c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103723228s STEP: Saw pod success May 18 23:41:21.404: INFO: Pod "downwardapi-volume-48764c40-2a54-4ce3-b044-192c94d157c8" satisfied condition "Succeeded or Failed" May 18 23:41:21.407: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-48764c40-2a54-4ce3-b044-192c94d157c8 container client-container: STEP: delete the pod May 18 23:41:21.486: INFO: Waiting for pod downwardapi-volume-48764c40-2a54-4ce3-b044-192c94d157c8 to disappear May 18 23:41:21.526: INFO: Pod downwardapi-volume-48764c40-2a54-4ce3-b044-192c94d157c8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:41:21.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6113" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":254,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:41:21.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2513.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2513.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2513.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2513.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2513.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2513.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 18 23:41:28.121: INFO: DNS probes using dns-2513/dns-test-4b1fae96-c105-4d0c-b671-edf8a01004e5 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:41:28.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2513" for this suite. 
• [SLOW TEST:6.617 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":11,"skipped":266,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 18 23:41:28.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-6675
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6675 to expose endpoints map[]
May 18 23:41:28.839: INFO: Get endpoints failed (3.643388ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 18 23:41:29.843: INFO: successfully validated that service multi-endpoint-test in namespace services-6675 exposes endpoints map[] (1.007487866s elapsed)
STEP: Creating pod pod1 in namespace services-6675
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6675 to expose endpoints map[pod1:[100]]
May 18 23:41:34.411: INFO: successfully validated that service multi-endpoint-test in namespace services-6675 exposes endpoints map[pod1:[100]] (4.562391014s elapsed)
STEP: Creating pod pod2 in namespace services-6675
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6675 to expose endpoints map[pod1:[100] pod2:[101]]
May 18 23:41:38.552: INFO: successfully validated that service multi-endpoint-test in namespace services-6675 exposes endpoints map[pod1:[100] pod2:[101]] (4.135563842s elapsed)
STEP: Deleting pod pod1 in namespace services-6675
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6675 to expose endpoints map[pod2:[101]]
May 18 23:41:39.612: INFO: successfully validated that service multi-endpoint-test in namespace services-6675 exposes endpoints map[pod2:[101]] (1.056164286s elapsed)
STEP: Deleting pod pod2 in namespace services-6675
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6675 to expose endpoints map[]
May 18 23:41:40.733: INFO: successfully validated that service multi-endpoint-test in namespace services-6675 exposes endpoints map[] (1.117060218s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:41:40.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6675" for this suite.
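For context: the Services spec above creates a two-port service and verifies that the endpoints map tracks pods as they come and go (pod1 behind container port 100, pod2 behind 101 in the log). A minimal sketch of such a service in Go, assuming the k8s.io/api packages; the selector, port names, and service port numbers are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiportService builds a service like multi-endpoint-test above: two named
// ports, each forwarding to a different container port.
func multiportService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"},
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)}, // served by pod1
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)}, // served by pod2
			},
		},
	}
}

func main() { fmt.Println(len(multiportService().Spec.Ports)) }
```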
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:12.570 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":12,"skipped":281,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 18 23:41:40.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 18 23:41:40.916: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3e6502b-dd3b-4eb9-9e6d-1546318ccbcb" in namespace "downward-api-9656" to be "Succeeded or Failed"
May 18 23:41:40.937: INFO: Pod "downwardapi-volume-a3e6502b-dd3b-4eb9-9e6d-1546318ccbcb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.172593ms
May 18 23:41:42.955: INFO: Pod "downwardapi-volume-a3e6502b-dd3b-4eb9-9e6d-1546318ccbcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038930265s
May 18 23:41:45.027: INFO: Pod "downwardapi-volume-a3e6502b-dd3b-4eb9-9e6d-1546318ccbcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111055854s
STEP: Saw pod success
May 18 23:41:45.028: INFO: Pod "downwardapi-volume-a3e6502b-dd3b-4eb9-9e6d-1546318ccbcb" satisfied condition "Succeeded or Failed"
May 18 23:41:45.031: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a3e6502b-dd3b-4eb9-9e6d-1546318ccbcb container client-container: 
STEP: delete the pod
May 18 23:41:45.191: INFO: Waiting for pod downwardapi-volume-a3e6502b-dd3b-4eb9-9e6d-1546318ccbcb to disappear
May 18 23:41:45.196: INFO: Pod downwardapi-volume-a3e6502b-dd3b-4eb9-9e6d-1546318ccbcb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:41:45.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9656" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":292,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:41:45.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:41:50.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2117" for this suite. • [SLOW TEST:5.161 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":14,"skipped":295,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:41:50.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-7510 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7510 STEP: Deleting pre-stop pod May 18 23:42:03.522: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:42:03.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7510" for this suite. • [SLOW TEST:13.187 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":15,"skipped":306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:42:03.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-fe753d55-2f4d-4bbd-a53a-99787600164b STEP: updating the pod May 18 23:42:12.164: INFO: Successfully updated pod "var-expansion-fe753d55-2f4d-4bbd-a53a-99787600164b" STEP: waiting for pod and container restart STEP: Failing liveness probe May 18 23:42:12.183: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-9749 PodName:var-expansion-fe753d55-2f4d-4bbd-a53a-99787600164b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 18 23:42:12.183: INFO: >>> kubeConfig: /root/.kube/config I0518 23:42:12.255041 7 log.go:172] (0xc003446840) (0xc001b945a0) Create stream I0518 23:42:12.255082 7 log.go:172] (0xc003446840) (0xc001b945a0) Stream added, broadcasting: 1 I0518 23:42:12.258584 7 log.go:172] (0xc003446840) Reply frame received for 1 I0518 23:42:12.258652 7 log.go:172] (0xc003446840) (0xc002037720) Create stream I0518 23:42:12.258683 7 log.go:172] (0xc003446840) (0xc002037720) Stream added, broadcasting: 3 I0518 23:42:12.259926 7 log.go:172] (0xc003446840) Reply frame received for 3 I0518 23:42:12.259986 7 log.go:172] (0xc003446840) (0xc001fb1040) Create stream I0518 23:42:12.260012 7 log.go:172] (0xc003446840) (0xc001fb1040) Stream added, broadcasting: 5 I0518 23:42:12.261007 7 log.go:172] (0xc003446840) Reply frame received for 5 I0518 23:42:12.338492 7 log.go:172] (0xc003446840) Data frame received for 5 I0518 23:42:12.338586 7 log.go:172] (0xc001fb1040) (5) Data frame handling I0518 23:42:12.338673 7 log.go:172] (0xc003446840) Data frame received for 3 I0518 23:42:12.338728 7 log.go:172] (0xc002037720) (3) Data frame handling I0518 23:42:12.340168 7 log.go:172] (0xc003446840) Data frame received for 1 I0518 23:42:12.340182 7 
I0518 23:42:12.340191 7 log.go:172] (0xc001b945a0) (1) Data frame sent
I0518 23:42:12.340357 7 log.go:172] (0xc003446840) (0xc001b945a0) Stream removed, broadcasting: 1
I0518 23:42:12.340405 7 log.go:172] (0xc003446840) Go away received
I0518 23:42:12.340784 7 log.go:172] (0xc003446840) (0xc001b945a0) Stream removed, broadcasting: 1
I0518 23:42:12.340807 7 log.go:172] (0xc003446840) (0xc002037720) Stream removed, broadcasting: 3
I0518 23:42:12.340822 7 log.go:172] (0xc003446840) (0xc001fb1040) Stream removed, broadcasting: 5
May 18 23:42:12.340: INFO: Pod exec output: /
STEP: Waiting for container to restart
May 18 23:42:12.351: INFO: Container dapi-container, restarts: 0
May 18 23:42:22.356: INFO: Container dapi-container, restarts: 0
May 18 23:42:32.355: INFO: Container dapi-container, restarts: 0
May 18 23:42:42.356: INFO: Container dapi-container, restarts: 0
May 18 23:42:52.356: INFO: Container dapi-container, restarts: 1
May 18 23:42:52.356: INFO: Container has restart count: 1
STEP: Rewriting the file
May 18 23:42:52.356: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-9749 PodName:var-expansion-fe753d55-2f4d-4bbd-a53a-99787600164b ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 18 23:42:52.356: INFO: >>> kubeConfig: /root/.kube/config
I0518 23:42:52.392231 7 log.go:172] (0xc002c51340) (0xc002b06500) Create stream
I0518 23:42:52.392275 7 log.go:172] (0xc002c51340) (0xc002b06500) Stream added, broadcasting: 1
I0518 23:42:52.394925 7 log.go:172] (0xc002c51340) Reply frame received for 1
I0518 23:42:52.394990 7 log.go:172] (0xc002c51340) (0xc002b065a0) Create stream
I0518 23:42:52.395013 7 log.go:172] (0xc002c51340) (0xc002b065a0) Stream added, broadcasting: 3
I0518 23:42:52.395929 7 log.go:172] (0xc002c51340) Reply frame received for 3
I0518 23:42:52.395965 7 log.go:172] (0xc002c51340) (0xc002b06640) Create stream
I0518 23:42:52.395976 7 log.go:172] (0xc002c51340) (0xc002b06640) Stream added, broadcasting: 5
I0518 23:42:52.396816 7 log.go:172] (0xc002c51340) Reply frame received for 5
I0518 23:42:52.490835 7 log.go:172] (0xc002c51340) Data frame received for 3
I0518 23:42:52.490891 7 log.go:172] (0xc002b065a0) (3) Data frame handling
I0518 23:42:52.490940 7 log.go:172] (0xc002c51340) Data frame received for 5
I0518 23:42:52.490975 7 log.go:172] (0xc002b06640) (5) Data frame handling
I0518 23:42:52.492330 7 log.go:172] (0xc002c51340) Data frame received for 1
I0518 23:42:52.492360 7 log.go:172] (0xc002b06500) (1) Data frame handling
I0518 23:42:52.492375 7 log.go:172] (0xc002b06500) (1) Data frame sent
I0518 23:42:52.492399 7 log.go:172] (0xc002c51340) (0xc002b06500) Stream removed, broadcasting: 1
I0518 23:42:52.492420 7 log.go:172] (0xc002c51340) Go away received
I0518 23:42:52.492561 7 log.go:172] (0xc002c51340) (0xc002b06500) Stream removed, broadcasting: 1
I0518 23:42:52.492639 7 log.go:172] (0xc002c51340) (0xc002b065a0) Stream removed, broadcasting: 3
I0518 23:42:52.492677 7 log.go:172] (0xc002c51340) (0xc002b06640) Stream removed, broadcasting: 5
May 18 23:42:52.492: INFO: Exec stderr: ""
May 18 23:42:52.492: INFO: Pod exec output: 
STEP: Waiting for container to stop restarting
May 18 23:43:20.502: INFO: Container has restart count: 2
May 18 23:44:22.502: INFO: Container restart has stabilized
STEP: test for subpath mounted with old value
May 18 23:44:22.505: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-9749 PodName:var-expansion-fe753d55-2f4d-4bbd-a53a-99787600164b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 18 23:44:22.506: INFO: >>> kubeConfig: /root/.kube/config
I0518 23:44:22.534096 7 log.go:172] (0xc002c11ad0) (0xc001eb9180) Create stream
I0518 23:44:22.534123 7 log.go:172] (0xc002c11ad0) (0xc001eb9180) Stream added, broadcasting: 1
I0518 23:44:22.536038 7 log.go:172] (0xc002c11ad0) Reply frame received for 1
I0518 23:44:22.536074 7 log.go:172] (0xc002c11ad0) (0xc002d40d20) Create stream
I0518 23:44:22.536086 7 log.go:172] (0xc002c11ad0) (0xc002d40d20) Stream added, broadcasting: 3
I0518 23:44:22.537026 7 log.go:172] (0xc002c11ad0) Reply frame received for 3
I0518 23:44:22.537075 7 log.go:172] (0xc002c11ad0) (0xc002d40e60) Create stream
I0518 23:44:22.537090 7 log.go:172] (0xc002c11ad0) (0xc002d40e60) Stream added, broadcasting: 5
I0518 23:44:22.538258 7 log.go:172] (0xc002c11ad0) Reply frame received for 5
I0518 23:44:22.684820 7 log.go:172] (0xc002c11ad0) Data frame received for 5
I0518 23:44:22.684859 7 log.go:172] (0xc002d40e60) (5) Data frame handling
I0518 23:44:22.684889 7 log.go:172] (0xc002c11ad0) Data frame received for 3
I0518 23:44:22.684904 7 log.go:172] (0xc002d40d20) (3) Data frame handling
I0518 23:44:22.686671 7 log.go:172] (0xc002c11ad0) Data frame received for 1
I0518 23:44:22.686735 7 log.go:172] (0xc001eb9180) (1) Data frame handling
I0518 23:44:22.686774 7 log.go:172] (0xc001eb9180) (1) Data frame sent
I0518 23:44:22.686794 7 log.go:172] (0xc002c11ad0) (0xc001eb9180) Stream removed, broadcasting: 1
I0518 23:44:22.686822 7 log.go:172] (0xc002c11ad0) Go away received
I0518 23:44:22.686933 7 log.go:172] (0xc002c11ad0) (0xc001eb9180) Stream removed, broadcasting: 1
I0518 23:44:22.686949 7 log.go:172] (0xc002c11ad0) (0xc002d40d20) Stream removed, broadcasting: 3
I0518 23:44:22.686955 7 log.go:172] (0xc002c11ad0) (0xc002d40e60) Stream removed, broadcasting: 5
May 18 23:44:22.690: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-9749 PodName:var-expansion-fe753d55-2f4d-4bbd-a53a-99787600164b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 18 23:44:22.690: INFO: >>> kubeConfig: /root/.kube/config
I0518 23:44:22.718797 7 log.go:172] (0xc0023566e0) (0xc001eb9b80) Create stream
I0518 23:44:22.718837 7 log.go:172] (0xc0023566e0) (0xc001eb9b80) Stream added, broadcasting: 1
I0518 23:44:22.721521 7 log.go:172] (0xc0023566e0) Reply frame received for 1
I0518 23:44:22.721560 7 log.go:172] (0xc0023566e0) (0xc002d40f00) Create stream
I0518 23:44:22.721574 7 log.go:172] (0xc0023566e0) (0xc002d40f00) Stream added, broadcasting: 3
I0518 23:44:22.722583 7 log.go:172] (0xc0023566e0) Reply frame received for 3
I0518 23:44:22.722616 7 log.go:172] (0xc0023566e0) (0xc002b066e0) Create stream
I0518 23:44:22.722630 7 log.go:172] (0xc0023566e0) (0xc002b066e0) Stream added, broadcasting: 5
I0518 23:44:22.723578 7 log.go:172] (0xc0023566e0) Reply frame received for 5
I0518 23:44:22.794611 7 log.go:172] (0xc0023566e0) Data frame received for 5
I0518 23:44:22.794659 7 log.go:172] (0xc0023566e0) Data frame received for 3
I0518 23:44:22.794710 7 log.go:172] (0xc002d40f00) (3) Data frame handling
I0518 23:44:22.794755 7 log.go:172] (0xc002b066e0) (5) Data frame handling
I0518 23:44:22.795910 7 log.go:172] (0xc0023566e0) Data frame received for 1
I0518 23:44:22.795927 7 log.go:172] (0xc001eb9b80) (1) Data frame handling
I0518 23:44:22.795934 7 log.go:172] (0xc001eb9b80) (1) Data frame sent
I0518 23:44:22.795949 7 log.go:172] (0xc0023566e0) (0xc001eb9b80) Stream removed, broadcasting: 1
I0518 23:44:22.795977 7 log.go:172] (0xc0023566e0) Go away received
I0518 23:44:22.796056 7 log.go:172] (0xc0023566e0) (0xc001eb9b80) Stream removed, broadcasting: 1
I0518 23:44:22.796088 7 log.go:172] (0xc0023566e0) (0xc002d40f00) Stream removed, broadcasting: 3
I0518 23:44:22.796100 7 log.go:172] (0xc0023566e0) (0xc002b066e0) Stream removed, broadcasting: 5
May 18 23:44:22.796: INFO: Deleting pod "var-expansion-fe753d55-2f4d-4bbd-a53a-99787600164b" in namespace "var-expansion-9749"
May 18 23:44:22.801: INFO: Wait up to 5m0s for pod "var-expansion-fe753d55-2f4d-4bbd-a53a-99787600164b" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:45:04.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9749" for this suite.
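The ExecWithOptions lines and the "Create stream" / "Data frame received" klog output above come from the framework running shell commands inside the pod over SPDY. A minimal sketch of the same mechanism in Go, assuming v0.18-era client-go; the pod name in main is hypothetical, and error handling is abbreviated:

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a shell command in a pod container over SPDY, the mechanism
// behind the ExecWithOptions / stream log lines in the transcript.
func execInPod(kubeconfig, ns, pod, container, command string) (string, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return "", err
	}
	client := kubernetes.NewForConfigOrDie(config)
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"/bin/sh", "-c", command},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), err
}

func main() {
	out, err := execInPod("/root/.kube/config", "var-expansion-9749",
		"var-expansion-pod", "dapi-container", // hypothetical pod name
		"test -f /volume_mount/foo/test.log && echo present")
	fmt.Println(out, err)
}
```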
• [SLOW TEST:181.369 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":16,"skipped":355,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 18 23:45:04.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-a062d314-5643-4c57-91a0-95aa2784a1fa
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 18 23:45:11.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2721" for this suite.
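For context: the binary-data spec above relies on the ConfigMap API's two payload fields, Data for UTF-8 text and BinaryData for arbitrary bytes, and checks that both round-trip through a volume mount. A minimal sketch in Go, assuming the k8s.io/api packages; the names and byte values are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// binaryConfigMap carries both a text key (Data) and a binary key
// (BinaryData); a pod mounting it sees one file per key.
func binaryConfigMap() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe}}, // arbitrary bytes, not valid UTF-8
	}
}

func main() { fmt.Println(len(binaryConfigMap().BinaryData["dump.bin"])) }
```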
• [SLOW TEST:6.222 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":17,"skipped":366,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:45:11.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-97cc7005-9c74-475c-9c42-4629592a4a3a STEP: Creating a pod to test consume secrets May 18 23:45:11.247: INFO: Waiting up to 5m0s for pod "pod-secrets-7b24e93e-68ed-4d1b-9736-9fbbee95d4d4" in namespace "secrets-5820" to be "Succeeded or Failed" May 18 23:45:11.264: INFO: Pod "pod-secrets-7b24e93e-68ed-4d1b-9736-9fbbee95d4d4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.467334ms May 18 23:45:13.268: INFO: Pod "pod-secrets-7b24e93e-68ed-4d1b-9736-9fbbee95d4d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020986913s May 18 23:45:15.273: INFO: Pod "pod-secrets-7b24e93e-68ed-4d1b-9736-9fbbee95d4d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025034251s STEP: Saw pod success May 18 23:45:15.273: INFO: Pod "pod-secrets-7b24e93e-68ed-4d1b-9736-9fbbee95d4d4" satisfied condition "Succeeded or Failed" May 18 23:45:15.275: INFO: Trying to get logs from node latest-worker pod pod-secrets-7b24e93e-68ed-4d1b-9736-9fbbee95d4d4 container secret-volume-test: STEP: delete the pod May 18 23:45:15.435: INFO: Waiting for pod pod-secrets-7b24e93e-68ed-4d1b-9736-9fbbee95d4d4 to disappear May 18 23:45:15.461: INFO: Pod pod-secrets-7b24e93e-68ed-4d1b-9736-9fbbee95d4d4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:45:15.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5820" for this suite. 
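What "consumable from pods in volume" amounts to: the secret's keys appear as files under the mount path, and the test container simply reads the file and exits, which is why the pod is expected to reach Succeeded. A rough equivalent, with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo                   # illustrative name
type: Opaque
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]   # exits 0 -> pod Succeeded
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
EOF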
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":382,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:45:15.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:45:19.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9132" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":19,"skipped":388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:45:19.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3678 STEP: creating a selector STEP: Creating the service pods in kubernetes May 18 23:45:19.737: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 18 23:45:19.871: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 18 23:45:22.070: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 18 23:45:23.933: INFO: The status of Pod netserver-0 is Running (Ready = false) May 18 23:45:25.875: INFO: The status of Pod netserver-0 is Running (Ready = false) May 18 23:45:27.895: INFO: The status of Pod netserver-0 is Running (Ready = false) May 18 23:45:29.875: INFO: The status of Pod netserver-0 is Running (Ready = false) May 18 23:45:31.875: INFO: The status of Pod netserver-0 is Running (Ready = false) May 18 23:45:33.875: INFO: The status of Pod netserver-0 is Running (Ready = false) May 18 23:45:35.876: INFO: The status of Pod netserver-0 is Running (Ready = false) May 18 
23:45:37.875: INFO: The status of Pod netserver-0 is Running (Ready = false) May 18 23:45:39.876: INFO: The status of Pod netserver-0 is Running (Ready = true) May 18 23:45:39.882: INFO: The status of Pod netserver-1 is Running (Ready = false) May 18 23:45:41.888: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 18 23:45:45.998: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.65:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3678 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 18 23:45:45.998: INFO: >>> kubeConfig: /root/.kube/config I0518 23:45:46.078786 7 log.go:172] (0xc002bee2c0) (0xc002c468c0) Create stream I0518 23:45:46.078813 7 log.go:172] (0xc002bee2c0) (0xc002c468c0) Stream added, broadcasting: 1 I0518 23:45:46.080889 7 log.go:172] (0xc002bee2c0) Reply frame received for 1 I0518 23:45:46.080926 7 log.go:172] (0xc002bee2c0) (0xc002c46960) Create stream I0518 23:45:46.080938 7 log.go:172] (0xc002bee2c0) (0xc002c46960) Stream added, broadcasting: 3 I0518 23:45:46.081996 7 log.go:172] (0xc002bee2c0) Reply frame received for 3 I0518 23:45:46.082057 7 log.go:172] (0xc002bee2c0) (0xc002d40a00) Create stream I0518 23:45:46.082072 7 log.go:172] (0xc002bee2c0) (0xc002d40a00) Stream added, broadcasting: 5 I0518 23:45:46.082840 7 log.go:172] (0xc002bee2c0) Reply frame received for 5 I0518 23:45:46.236393 7 log.go:172] (0xc002bee2c0) Data frame received for 3 I0518 23:45:46.236420 7 log.go:172] (0xc002bee2c0) Data frame received for 5 I0518 23:45:46.236448 7 log.go:172] (0xc002d40a00) (5) Data frame handling I0518 23:45:46.236502 7 log.go:172] (0xc002c46960) (3) Data frame handling I0518 23:45:46.236562 7 log.go:172] (0xc002c46960) (3) Data frame sent I0518 23:45:46.236588 7 log.go:172] (0xc002bee2c0) Data frame received for 3 I0518 23:45:46.236616 7 log.go:172] (0xc002c46960) (3) Data frame handling I0518 23:45:46.238825 7 log.go:172] (0xc002bee2c0) Data frame received for 1 I0518 23:45:46.238842 7 log.go:172] (0xc002c468c0) (1) Data frame handling I0518 23:45:46.238854 7 log.go:172] (0xc002c468c0) (1) Data frame sent I0518 23:45:46.238862 7 log.go:172] (0xc002bee2c0) (0xc002c468c0) Stream removed, broadcasting: 1 I0518 23:45:46.238926 7 log.go:172] (0xc002bee2c0) (0xc002c468c0) Stream removed, broadcasting: 1 I0518 23:45:46.238934 7 log.go:172] (0xc002bee2c0) (0xc002c46960) Stream removed, broadcasting: 3 I0518 23:45:46.238974 7 log.go:172] (0xc002bee2c0) Go away received I0518 23:45:46.239072 7 log.go:172] (0xc002bee2c0) (0xc002d40a00) Stream removed, broadcasting: 5 May 18 23:45:46.239: INFO: Found all expected endpoints: [netserver-0] May 18 23:45:46.245: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.68:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3678 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 18 23:45:46.245: INFO: >>> kubeConfig: /root/.kube/config I0518 23:45:46.273729 7 log.go:172] (0xc0023568f0) (0xc002b065a0) Create stream I0518 23:45:46.273755 7 log.go:172] (0xc0023568f0) (0xc002b065a0) Stream added, broadcasting: 1 I0518 23:45:46.275884 7 log.go:172] (0xc0023568f0) Reply frame received for 1 I0518 23:45:46.275919 7 log.go:172] (0xc0023568f0) (0xc002b06640) Create stream I0518 23:45:46.275931 7 log.go:172] (0xc0023568f0) 
(0xc002b06640) Stream added, broadcasting: 3 I0518 23:45:46.276868 7 log.go:172] (0xc0023568f0) Reply frame received for 3 I0518 23:45:46.276906 7 log.go:172] (0xc0023568f0) (0xc002c46a00) Create stream I0518 23:45:46.276919 7 log.go:172] (0xc0023568f0) (0xc002c46a00) Stream added, broadcasting: 5 I0518 23:45:46.278045 7 log.go:172] (0xc0023568f0) Reply frame received for 5 I0518 23:45:46.350438 7 log.go:172] (0xc0023568f0) Data frame received for 3 I0518 23:45:46.350522 7 log.go:172] (0xc002b06640) (3) Data frame handling I0518 23:45:46.350567 7 log.go:172] (0xc002b06640) (3) Data frame sent I0518 23:45:46.350588 7 log.go:172] (0xc0023568f0) Data frame received for 3 I0518 23:45:46.350603 7 log.go:172] (0xc002b06640) (3) Data frame handling I0518 23:45:46.350627 7 log.go:172] (0xc0023568f0) Data frame received for 5 I0518 23:45:46.350651 7 log.go:172] (0xc002c46a00) (5) Data frame handling I0518 23:45:46.352406 7 log.go:172] (0xc0023568f0) Data frame received for 1 I0518 23:45:46.352433 7 log.go:172] (0xc002b065a0) (1) Data frame handling I0518 23:45:46.352445 7 log.go:172] (0xc002b065a0) (1) Data frame sent I0518 23:45:46.352465 7 log.go:172] (0xc0023568f0) (0xc002b065a0) Stream removed, broadcasting: 1 I0518 23:45:46.352525 7 log.go:172] (0xc0023568f0) Go away received I0518 23:45:46.352575 7 log.go:172] (0xc0023568f0) (0xc002b065a0) Stream removed, broadcasting: 1 I0518 23:45:46.352613 7 log.go:172] (0xc0023568f0) (0xc002b06640) Stream removed, broadcasting: 3 I0518 23:45:46.352630 7 log.go:172] (0xc0023568f0) (0xc002c46a00) Stream removed, broadcasting: 5 May 18 23:45:46.352: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:45:46.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3678" for this suite. 
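The ExecWithOptions entries above are the framework driving the same check one can run by hand: exec into the host-network test pod and curl each netserver pod IP on port 8080. Using the namespace, pod name, and one pod IP from this particular run (substitute your own):

POD_IP=10.244.1.65    # netserver-0's pod IP as logged above
kubectl exec -n pod-network-test-3678 host-test-container-pod -c agnhost -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://${POD_IP}:8080/hostName"

A non-empty hostname in the output is what the test counts as a found endpoint.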
• [SLOW TEST:26.669 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":20,"skipped":440,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:45:46.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 18 23:45:46.438: INFO: Waiting up to 5m0s for pod "pod-fda04612-2d2d-498a-b4ee-cfbc81600ace" in namespace "emptydir-9115" to be "Succeeded or Failed" May 18 23:45:46.485: INFO: Pod "pod-fda04612-2d2d-498a-b4ee-cfbc81600ace": Phase="Pending", Reason="", readiness=false. Elapsed: 46.949849ms May 18 23:45:48.617: INFO: Pod "pod-fda04612-2d2d-498a-b4ee-cfbc81600ace": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178016448s May 18 23:45:50.670: INFO: Pod "pod-fda04612-2d2d-498a-b4ee-cfbc81600ace": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.231713434s STEP: Saw pod success May 18 23:45:50.670: INFO: Pod "pod-fda04612-2d2d-498a-b4ee-cfbc81600ace" satisfied condition "Succeeded or Failed" May 18 23:45:50.673: INFO: Trying to get logs from node latest-worker2 pod pod-fda04612-2d2d-498a-b4ee-cfbc81600ace container test-container: STEP: delete the pod May 18 23:45:50.722: INFO: Waiting for pod pod-fda04612-2d2d-498a-b4ee-cfbc81600ace to disappear May 18 23:45:50.731: INFO: Pod pod-fda04612-2d2d-498a-b4ee-cfbc81600ace no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:45:50.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9115" for this suite. 
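The (non-root,0777,default) variant means: run as a non-root UID, expect 0777 file semantics, on the default emptyDir medium (node disk, as opposed to medium: Memory). A rough sketch of such a pod; names, image, and the exact commands are illustrative, not the suite's mounttest fixture:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo            # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # the non-root part of the variant
  containers:
  - name: test-container
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium; medium: Memory would select tmpfs
EOF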
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":21,"skipped":460,"failed":0} SSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:45:50.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:45:50.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-5959" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":22,"skipped":466,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:45:50.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-855b9645-4c3f-41e2-9a92-f82fe8b3163f STEP: Creating secret with name secret-projected-all-test-volume-06932152-57a9-4ee7-8a80-d5a1f8b0e848 STEP: Creating a pod to test Check all projections for projected volume plugin May 18 23:45:50.984: INFO: Waiting up to 5m0s for pod "projected-volume-0c71171c-0f15-4537-b7cd-bdfb33e0bb46" in namespace "projected-4651" to be "Succeeded or Failed" May 18 23:45:50.989: INFO: Pod "projected-volume-0c71171c-0f15-4537-b7cd-bdfb33e0bb46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138886ms May 18 23:45:53.222: INFO: Pod "projected-volume-0c71171c-0f15-4537-b7cd-bdfb33e0bb46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237553258s May 18 23:45:55.412: INFO: Pod "projected-volume-0c71171c-0f15-4537-b7cd-bdfb33e0bb46": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.427288252s STEP: Saw pod success May 18 23:45:55.412: INFO: Pod "projected-volume-0c71171c-0f15-4537-b7cd-bdfb33e0bb46" satisfied condition "Succeeded or Failed" May 18 23:45:55.415: INFO: Trying to get logs from node latest-worker2 pod projected-volume-0c71171c-0f15-4537-b7cd-bdfb33e0bb46 container projected-all-volume-test: STEP: delete the pod May 18 23:45:55.528: INFO: Waiting for pod projected-volume-0c71171c-0f15-4537-b7cd-bdfb33e0bb46 to disappear May 18 23:45:55.568: INFO: Pod projected-volume-0c71171c-0f15-4537-b7cd-bdfb33e0bb46 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:45:55.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4651" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":23,"skipped":479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:45:55.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-4a7a426f-87db-47c4-8b39-3194d67b6787 in namespace container-probe-2537 May 18 23:45:59.710: INFO: Started pod liveness-4a7a426f-87db-47c4-8b39-3194d67b6787 in namespace container-probe-2537 STEP: checking the pod's current state and verifying that restartCount is present May 18 23:45:59.713: INFO: Initial restart count of pod liveness-4a7a426f-87db-47c4-8b39-3194d67b6787 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:50:00.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2537" for this suite. 
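This probe test is the negative case: a TCP liveness probe against a port the container actually listens on, observed for roughly four minutes (hence the long runtime below) while asserting restartCount stays 0. A sketch of the shape of such a pod; the image, tag, and args are assumptions, and any server bound to 8080 would do:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo             # illustrative name
spec:
  containers:
  - name: netserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed image/tag
    args: ["netexec", "--http-port=8080"]            # serves on 8080 so the probe always connects
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10               # probe succeeds every period; kubelet never restarts it
EOF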
• [SLOW TEST:244.758 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":24,"skipped":520,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:50:00.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 18 23:50:00.387: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:50:15.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3895" for this suite.
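Mechanically, "one version gets changed to not be served" is a flip of served: false on one entry in spec.versions; the apiserver then drops that version's definitions from the published /openapi/v2 document while leaving the other version intact. A minimal multi-version CRD sketch with an illustrative group and names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com          # illustrative
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false                     # the flip the test performs; v2 leaves /openapi/v2
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF

kubectl get --raw /openapi/v2 | grep -c com.example.v2   # expect 0 once v2 is unserved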
• [SLOW TEST:14.859 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":25,"skipped":533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:50:15.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 18 23:50:15.265: INFO: Waiting up to 5m0s for pod "downward-api-c1506606-6bff-4ee9-81f8-ab185f5b9f2d" in namespace "downward-api-1050" to be "Succeeded or Failed" May 18 23:50:15.269: INFO: Pod "downward-api-c1506606-6bff-4ee9-81f8-ab185f5b9f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.096301ms May 18 23:50:17.301: INFO: Pod "downward-api-c1506606-6bff-4ee9-81f8-ab185f5b9f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035531731s May 18 23:50:19.306: INFO: Pod "downward-api-c1506606-6bff-4ee9-81f8-ab185f5b9f2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040555794s STEP: Saw pod success May 18 23:50:19.306: INFO: Pod "downward-api-c1506606-6bff-4ee9-81f8-ab185f5b9f2d" satisfied condition "Succeeded or Failed" May 18 23:50:19.309: INFO: Trying to get logs from node latest-worker pod downward-api-c1506606-6bff-4ee9-81f8-ab185f5b9f2d container dapi-container: STEP: delete the pod May 18 23:50:19.377: INFO: Waiting for pod downward-api-c1506606-6bff-4ee9-81f8-ab185f5b9f2d to disappear May 18 23:50:19.382: INFO: Pod downward-api-c1506606-6bff-4ee9-81f8-ab185f5b9f2d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:50:19.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1050" for this suite. 
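The env vars under test come from the downward API's fieldRef sources. A minimal pod sketch; names and image are illustrative, though dapi-container matches the container name in the log above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "env | grep POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF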
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":26,"skipped":567,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:50:19.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-7f03bcf8-3e4e-49dc-b4b1-eb96ac24e6d0 STEP: Creating a pod to test consume configMaps May 18 23:50:19.540: INFO: Waiting up to 5m0s for pod "pod-configmaps-0dd1f8f0-ab42-484b-a649-97e507bfa49f" in namespace "configmap-1695" to be "Succeeded or Failed" May 18 23:50:19.544: INFO: Pod "pod-configmaps-0dd1f8f0-ab42-484b-a649-97e507bfa49f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.714361ms May 18 23:50:21.549: INFO: Pod "pod-configmaps-0dd1f8f0-ab42-484b-a649-97e507bfa49f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008491638s May 18 23:50:23.553: INFO: Pod "pod-configmaps-0dd1f8f0-ab42-484b-a649-97e507bfa49f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013117012s STEP: Saw pod success May 18 23:50:23.553: INFO: Pod "pod-configmaps-0dd1f8f0-ab42-484b-a649-97e507bfa49f" satisfied condition "Succeeded or Failed" May 18 23:50:23.556: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0dd1f8f0-ab42-484b-a649-97e507bfa49f container configmap-volume-test: STEP: delete the pod May 18 23:50:23.626: INFO: Waiting for pod pod-configmaps-0dd1f8f0-ab42-484b-a649-97e507bfa49f to disappear May 18 23:50:23.634: INFO: Pod pod-configmaps-0dd1f8f0-ab42-484b-a649-97e507bfa49f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:50:23.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1695" for this suite. 
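"Multiple volumes in the same pod" here means the same ConfigMap mounted twice through two volume entries, with the test reading the key via both paths. A rough equivalent with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-multi-demo          # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/cm-volume-1
    - name: cm-volume-2
      mountPath: /etc/cm-volume-2
  volumes:
  - name: cm-volume-1                 # same ConfigMap behind both volumes
    configMap:
      name: configmap-multi-demo
  - name: cm-volume-2
    configMap:
      name: configmap-multi-demo
EOF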
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":27,"skipped":570,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:50:23.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:50:23.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5225" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":28,"skipped":579,"failed":0} ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:50:23.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-b7e46085-8560-41c1-b1c5-a987fc63ebaf STEP: Creating secret with name s-test-opt-upd-8df96d70-260f-47cf-8eb0-722e81321380 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b7e46085-8560-41c1-b1c5-a987fc63ebaf STEP: Updating secret s-test-opt-upd-8df96d70-260f-47cf-8eb0-722e81321380 STEP: Creating secret with name s-test-opt-create-5f02a3f4-91e4-4721-bf5e-7461ccc14fd1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:50:34.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5129" for this suite. 
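The "optional updates" sequence above (delete s-test-opt-del, update s-test-opt-upd, create s-test-opt-create) works because secret volume sources can be marked optional: the pod starts even when the referenced secret is absent, and the kubelet re-projects the volume as secrets come and go. A sketch of one such mount, with illustrative pod and container names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional-demo     # illustrative name
spec:
  containers:
  - name: watcher
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "while true; do ls /etc/opt-secret; sleep 5; done"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/opt-secret
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt-create   # may not exist yet at pod creation
      optional: true                  # absence is tolerated; contents appear once created
EOF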
• [SLOW TEST:10.266 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":579,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:50:34.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 18 23:50:38.252: INFO: Pod pod-hostip-8f66c058-1b19-4e40-aba6-e327c8c79a95 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:50:38.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-407" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":30,"skipped":594,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:50:38.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4386 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-4386 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4386 May 18 23:50:38.367: INFO: Found 0 stateful pods, waiting for 1 May 18 23:50:48.371: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 18 23:50:48.376: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4386 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 18 23:50:51.389: INFO: stderr: "I0518 23:50:51.266818 35 log.go:172] (0xc000c0e580) (0xc00082d040) Create stream\nI0518 23:50:51.266892 35 log.go:172] (0xc000c0e580) (0xc00082d040) Stream added, broadcasting: 1\nI0518 23:50:51.270499 35 log.go:172] (0xc000c0e580) Reply frame received for 1\nI0518 23:50:51.270547 35 log.go:172] (0xc000c0e580) (0xc00082d5e0) Create stream\nI0518 23:50:51.270560 35 log.go:172] (0xc000c0e580) (0xc00082d5e0) Stream added, broadcasting: 3\nI0518 23:50:51.271531 35 log.go:172] (0xc000c0e580) Reply frame received for 3\nI0518 23:50:51.271579 35 log.go:172] (0xc000c0e580) (0xc000822dc0) Create stream\nI0518 23:50:51.271600 35 log.go:172] (0xc000c0e580) (0xc000822dc0) Stream added, broadcasting: 5\nI0518 23:50:51.272277 35 log.go:172] (0xc000c0e580) Reply frame received for 5\nI0518 23:50:51.359463 35 log.go:172] (0xc000c0e580) Data frame received for 5\nI0518 23:50:51.359497 35 log.go:172] (0xc000822dc0) (5) Data frame handling\nI0518 23:50:51.359524 35 log.go:172] (0xc000822dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0518 23:50:51.380678 35 log.go:172] (0xc000c0e580) Data frame received for 3\nI0518 23:50:51.380696 35 log.go:172] (0xc00082d5e0) (3) Data frame handling\nI0518 23:50:51.380713 35 log.go:172] (0xc000c0e580) Data frame received for 5\nI0518 23:50:51.380760 35 log.go:172] (0xc000822dc0) (5) Data frame handling\nI0518 23:50:51.380808 35 log.go:172] (0xc00082d5e0) (3) Data frame sent\nI0518 23:50:51.380833 35 log.go:172] (0xc000c0e580) Data frame received for 3\nI0518 23:50:51.380855 35 log.go:172] (0xc00082d5e0) (3) Data frame handling\nI0518 23:50:51.382857 35 log.go:172] (0xc000c0e580) Data frame received for 1\nI0518 23:50:51.382873 35 log.go:172] (0xc00082d040) (1) Data frame handling\nI0518 23:50:51.382891 35 log.go:172] (0xc00082d040) (1) Data frame sent\nI0518 23:50:51.382906 35 log.go:172] (0xc000c0e580) (0xc00082d040) Stream removed, broadcasting: 1\nI0518 23:50:51.383205 35 log.go:172] (0xc000c0e580) (0xc00082d040) Stream removed, broadcasting: 1\nI0518 23:50:51.383219 35 log.go:172] (0xc000c0e580) (0xc00082d5e0) Stream removed, broadcasting: 3\nI0518 23:50:51.383225 35 log.go:172] (0xc000c0e580) (0xc000822dc0) Stream removed, broadcasting: 5\nI0518 23:50:51.383282 35 log.go:172] (0xc000c0e580) Go away received\n" May 18 23:50:51.389: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 18 23:50:51.389: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 18 23:50:51.393: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 18 23:51:01.398: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 18 23:51:01.398: INFO: Waiting for statefulset status.replicas updated to 0 May 18 23:51:01.440: INFO: POD NODE PHASE GRACE CONDITIONS May 18 23:51:01.440: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:50:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:50:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:50:51 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:50:38 +0000 UTC }] May 18 23:51:01.440: INFO: May 18 23:51:01.440: INFO: StatefulSet ss has not reached scale 3, at 1 May 18 23:51:02.446: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968093427s May 18 23:51:03.452: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.96195575s May 18 23:51:04.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.956437099s May 18 23:51:05.460: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.951935729s May 18 23:51:06.466: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.947581389s May 18 23:51:07.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.941982396s May 18 23:51:08.475: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.9385029s May 18 23:51:09.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.932779961s May 18 23:51:10.486: INFO: Verifying statefulset ss doesn't scale past 3 for another 927.888663ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4386 May 18 23:51:11.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 18 23:51:11.732: INFO: stderr: "I0518 23:51:11.631466 62 log.go:172] (0xc000bbf1e0) (0xc0009f23c0) Create stream\nI0518 23:51:11.631546 62 log.go:172] (0xc000bbf1e0) (0xc0009f23c0) Stream added, broadcasting: 1\nI0518 23:51:11.636403 62 log.go:172] (0xc000bbf1e0) Reply frame received for 1\nI0518 23:51:11.636457 62 log.go:172] (0xc000bbf1e0) (0xc000670500) Create stream\nI0518 23:51:11.636471 62 log.go:172] (0xc000bbf1e0) (0xc000670500) Stream added, broadcasting: 3\nI0518 23:51:11.637850 62 log.go:172] (0xc000bbf1e0) Reply frame received for 3\nI0518 23:51:11.637898 62 log.go:172] (0xc000bbf1e0) (0xc000670dc0) Create stream\nI0518 23:51:11.637910 62 log.go:172] (0xc000bbf1e0) (0xc000670dc0) Stream added, broadcasting: 5\nI0518 23:51:11.638932 62 log.go:172] (0xc000bbf1e0) Reply frame received for 5\nI0518 23:51:11.723833 62 log.go:172] (0xc000bbf1e0) Data frame received for 3\nI0518 23:51:11.723879 62 log.go:172] (0xc000670500) (3) Data frame handling\nI0518 23:51:11.723935 62 log.go:172] (0xc000670500) (3) Data frame sent\nI0518 23:51:11.724026 62 log.go:172] (0xc000bbf1e0) Data frame received for 5\nI0518 23:51:11.724052 62 log.go:172] (0xc000670dc0) (5) Data frame handling\nI0518 23:51:11.724069 62 log.go:172] (0xc000670dc0) (5) Data frame sent\nI0518 23:51:11.724083 62 log.go:172] (0xc000bbf1e0) Data frame received for 5\nI0518 23:51:11.724096 62 log.go:172] (0xc000670dc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0518 23:51:11.724131 62 log.go:172] (0xc000bbf1e0) Data frame received for 3\nI0518 23:51:11.724177 62 log.go:172] (0xc000670500) (3) Data frame handling\nI0518 23:51:11.726070 62 log.go:172] (0xc000bbf1e0) Data frame received for 1\nI0518 23:51:11.726104 62 log.go:172] (0xc0009f23c0) (1) Data frame handling\nI0518 23:51:11.726136 62 log.go:172] (0xc0009f23c0) (1) Data frame sent\nI0518 23:51:11.726169 62 log.go:172] (0xc000bbf1e0) (0xc0009f23c0) Stream removed, broadcasting: 1\nI0518 23:51:11.726201 62 log.go:172] (0xc000bbf1e0) Go away received\nI0518 23:51:11.726578 62 log.go:172] (0xc000bbf1e0) 
(0xc0009f23c0) Stream removed, broadcasting: 1\nI0518 23:51:11.726600 62 log.go:172] (0xc000bbf1e0) (0xc000670500) Stream removed, broadcasting: 3\nI0518 23:51:11.726613 62 log.go:172] (0xc000bbf1e0) (0xc000670dc0) Stream removed, broadcasting: 5\n" May 18 23:51:11.732: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 18 23:51:11.732: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 18 23:51:11.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4386 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 18 23:51:11.932: INFO: stderr: "I0518 23:51:11.866901 81 log.go:172] (0xc00003a160) (0xc0000ccf00) Create stream\nI0518 23:51:11.866956 81 log.go:172] (0xc00003a160) (0xc0000ccf00) Stream added, broadcasting: 1\nI0518 23:51:11.868723 81 log.go:172] (0xc00003a160) Reply frame received for 1\nI0518 23:51:11.868775 81 log.go:172] (0xc00003a160) (0xc00039efa0) Create stream\nI0518 23:51:11.868883 81 log.go:172] (0xc00003a160) (0xc00039efa0) Stream added, broadcasting: 3\nI0518 23:51:11.869972 81 log.go:172] (0xc00003a160) Reply frame received for 3\nI0518 23:51:11.870010 81 log.go:172] (0xc00003a160) (0xc00078a780) Create stream\nI0518 23:51:11.870024 81 log.go:172] (0xc00003a160) (0xc00078a780) Stream added, broadcasting: 5\nI0518 23:51:11.870672 81 log.go:172] (0xc00003a160) Reply frame received for 5\nI0518 23:51:11.926709 81 log.go:172] (0xc00003a160) Data frame received for 5\nI0518 23:51:11.926748 81 log.go:172] (0xc00078a780) (5) Data frame handling\nI0518 23:51:11.926760 81 log.go:172] (0xc00078a780) (5) Data frame sent\nI0518 23:51:11.926770 81 log.go:172] (0xc00003a160) Data frame received for 5\nI0518 23:51:11.926777 81 log.go:172] (0xc00078a780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0518 23:51:11.926816 81 log.go:172] (0xc00003a160) Data frame received for 3\nI0518 23:51:11.926845 81 log.go:172] (0xc00039efa0) (3) Data frame handling\nI0518 23:51:11.926875 81 log.go:172] (0xc00039efa0) (3) Data frame sent\nI0518 23:51:11.926893 81 log.go:172] (0xc00003a160) Data frame received for 3\nI0518 23:51:11.926907 81 log.go:172] (0xc00039efa0) (3) Data frame handling\nI0518 23:51:11.928229 81 log.go:172] (0xc00003a160) Data frame received for 1\nI0518 23:51:11.928264 81 log.go:172] (0xc0000ccf00) (1) Data frame handling\nI0518 23:51:11.928278 81 log.go:172] (0xc0000ccf00) (1) Data frame sent\nI0518 23:51:11.928290 81 log.go:172] (0xc00003a160) (0xc0000ccf00) Stream removed, broadcasting: 1\nI0518 23:51:11.928338 81 log.go:172] (0xc00003a160) Go away received\nI0518 23:51:11.928608 81 log.go:172] (0xc00003a160) (0xc0000ccf00) Stream removed, broadcasting: 1\nI0518 23:51:11.928632 81 log.go:172] (0xc00003a160) (0xc00039efa0) Stream removed, broadcasting: 3\nI0518 23:51:11.928641 81 log.go:172] (0xc00003a160) (0xc00078a780) Stream removed, broadcasting: 5\n" May 18 23:51:11.932: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 18 23:51:11.932: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 18 23:51:11.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-4386 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 18 23:51:12.156: INFO: stderr: "I0518 23:51:12.078616 102 log.go:172] (0xc00091a0b0) (0xc00051e3c0) Create stream\nI0518 23:51:12.078683 102 log.go:172] (0xc00091a0b0) (0xc00051e3c0) Stream added, broadcasting: 1\nI0518 23:51:12.082114 102 log.go:172] (0xc00091a0b0) Reply frame received for 1\nI0518 23:51:12.082167 102 log.go:172] (0xc00091a0b0) (0xc0004bcf00) Create stream\nI0518 23:51:12.082188 102 log.go:172] (0xc00091a0b0) (0xc0004bcf00) Stream added, broadcasting: 3\nI0518 23:51:12.083221 102 log.go:172] (0xc00091a0b0) Reply frame received for 3\nI0518 23:51:12.083264 102 log.go:172] (0xc00091a0b0) (0xc0000dd9a0) Create stream\nI0518 23:51:12.083278 102 log.go:172] (0xc00091a0b0) (0xc0000dd9a0) Stream added, broadcasting: 5\nI0518 23:51:12.084609 102 log.go:172] (0xc00091a0b0) Reply frame received for 5\nI0518 23:51:12.149068 102 log.go:172] (0xc00091a0b0) Data frame received for 5\nI0518 23:51:12.149286 102 log.go:172] (0xc0000dd9a0) (5) Data frame handling\nI0518 23:51:12.149310 102 log.go:172] (0xc0000dd9a0) (5) Data frame sent\nI0518 23:51:12.149327 102 log.go:172] (0xc00091a0b0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0518 23:51:12.149351 102 log.go:172] (0xc00091a0b0) Data frame received for 3\nI0518 23:51:12.149383 102 log.go:172] (0xc0004bcf00) (3) Data frame handling\nI0518 23:51:12.149396 102 log.go:172] (0xc0004bcf00) (3) Data frame sent\nI0518 23:51:12.149409 102 log.go:172] (0xc00091a0b0) Data frame received for 3\nI0518 23:51:12.149425 102 log.go:172] (0xc0004bcf00) (3) Data frame handling\nI0518 23:51:12.149475 102 log.go:172] (0xc0000dd9a0) (5) Data frame handling\nI0518 23:51:12.151147 102 log.go:172] (0xc00091a0b0) Data frame received for 1\nI0518 23:51:12.151177 102 log.go:172] (0xc00051e3c0) (1) Data frame handling\nI0518 23:51:12.151205 102 log.go:172] (0xc00051e3c0) (1) Data frame sent\nI0518 23:51:12.151253 102 log.go:172] (0xc00091a0b0) (0xc00051e3c0) Stream removed, broadcasting: 1\nI0518 23:51:12.151285 102 log.go:172] (0xc00091a0b0) Go away received\nI0518 23:51:12.151626 102 log.go:172] (0xc00091a0b0) (0xc00051e3c0) Stream removed, broadcasting: 1\nI0518 23:51:12.151653 102 log.go:172] (0xc00091a0b0) (0xc0004bcf00) Stream removed, broadcasting: 3\nI0518 23:51:12.151663 102 log.go:172] (0xc00091a0b0) (0xc0000dd9a0) Stream removed, broadcasting: 5\n" May 18 23:51:12.156: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 18 23:51:12.156: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 18 23:51:12.160: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 18 23:51:12.160: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 18 23:51:12.160: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 18 23:51:12.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4386 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 18 23:51:12.371: INFO: stderr: "I0518 23:51:12.292849 125 log.go:172] (0xc000968000) (0xc0004fe1e0) Create 
stream\nI0518 23:51:12.292956 125 log.go:172] (0xc000968000) (0xc0004fe1e0) Stream added, broadcasting: 1\nI0518 23:51:12.296567 125 log.go:172] (0xc000968000) Reply frame received for 1\nI0518 23:51:12.296625 125 log.go:172] (0xc000968000) (0xc000432d20) Create stream\nI0518 23:51:12.296649 125 log.go:172] (0xc000968000) (0xc000432d20) Stream added, broadcasting: 3\nI0518 23:51:12.297907 125 log.go:172] (0xc000968000) Reply frame received for 3\nI0518 23:51:12.297960 125 log.go:172] (0xc000968000) (0xc0000ded20) Create stream\nI0518 23:51:12.297993 125 log.go:172] (0xc000968000) (0xc0000ded20) Stream added, broadcasting: 5\nI0518 23:51:12.298962 125 log.go:172] (0xc000968000) Reply frame received for 5\nI0518 23:51:12.364101 125 log.go:172] (0xc000968000) Data frame received for 5\nI0518 23:51:12.364142 125 log.go:172] (0xc0000ded20) (5) Data frame handling\nI0518 23:51:12.364162 125 log.go:172] (0xc0000ded20) (5) Data frame sent\nI0518 23:51:12.364176 125 log.go:172] (0xc000968000) Data frame received for 5\nI0518 23:51:12.364190 125 log.go:172] (0xc0000ded20) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0518 23:51:12.364219 125 log.go:172] (0xc000968000) Data frame received for 3\nI0518 23:51:12.364228 125 log.go:172] (0xc000432d20) (3) Data frame handling\nI0518 23:51:12.364237 125 log.go:172] (0xc000432d20) (3) Data frame sent\nI0518 23:51:12.364266 125 log.go:172] (0xc000968000) Data frame received for 3\nI0518 23:51:12.364282 125 log.go:172] (0xc000432d20) (3) Data frame handling\nI0518 23:51:12.366061 125 log.go:172] (0xc000968000) Data frame received for 1\nI0518 23:51:12.366088 125 log.go:172] (0xc0004fe1e0) (1) Data frame handling\nI0518 23:51:12.366101 125 log.go:172] (0xc0004fe1e0) (1) Data frame sent\nI0518 23:51:12.366115 125 log.go:172] (0xc000968000) (0xc0004fe1e0) Stream removed, broadcasting: 1\nI0518 23:51:12.366129 125 log.go:172] (0xc000968000) Go away received\nI0518 23:51:12.366525 125 log.go:172] (0xc000968000) (0xc0004fe1e0) Stream removed, broadcasting: 1\nI0518 23:51:12.366541 125 log.go:172] (0xc000968000) (0xc000432d20) Stream removed, broadcasting: 3\nI0518 23:51:12.366548 125 log.go:172] (0xc000968000) (0xc0000ded20) Stream removed, broadcasting: 5\n" May 18 23:51:12.371: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 18 23:51:12.371: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 18 23:51:12.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4386 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 18 23:51:12.612: INFO: stderr: "I0518 23:51:12.510247 145 log.go:172] (0xc00003a160) (0xc0002a2e60) Create stream\nI0518 23:51:12.510309 145 log.go:172] (0xc00003a160) (0xc0002a2e60) Stream added, broadcasting: 1\nI0518 23:51:12.512922 145 log.go:172] (0xc00003a160) Reply frame received for 1\nI0518 23:51:12.512975 145 log.go:172] (0xc00003a160) (0xc0002a3cc0) Create stream\nI0518 23:51:12.513003 145 log.go:172] (0xc00003a160) (0xc0002a3cc0) Stream added, broadcasting: 3\nI0518 23:51:12.514481 145 log.go:172] (0xc00003a160) Reply frame received for 3\nI0518 23:51:12.514528 145 log.go:172] (0xc00003a160) (0xc00014f540) Create stream\nI0518 23:51:12.514547 145 log.go:172] (0xc00003a160) (0xc00014f540) Stream added, broadcasting: 5\nI0518 23:51:12.515626 145 log.go:172] (0xc00003a160) 
Reply frame received for 5\nI0518 23:51:12.572802 145 log.go:172] (0xc00003a160) Data frame received for 5\nI0518 23:51:12.572839 145 log.go:172] (0xc00014f540) (5) Data frame handling\nI0518 23:51:12.572856 145 log.go:172] (0xc00014f540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0518 23:51:12.603461 145 log.go:172] (0xc00003a160) Data frame received for 3\nI0518 23:51:12.603505 145 log.go:172] (0xc0002a3cc0) (3) Data frame handling\nI0518 23:51:12.603544 145 log.go:172] (0xc0002a3cc0) (3) Data frame sent\nI0518 23:51:12.603588 145 log.go:172] (0xc00003a160) Data frame received for 3\nI0518 23:51:12.603627 145 log.go:172] (0xc0002a3cc0) (3) Data frame handling\nI0518 23:51:12.603931 145 log.go:172] (0xc00003a160) Data frame received for 5\nI0518 23:51:12.603971 145 log.go:172] (0xc00014f540) (5) Data frame handling\nI0518 23:51:12.606414 145 log.go:172] (0xc00003a160) Data frame received for 1\nI0518 23:51:12.606455 145 log.go:172] (0xc0002a2e60) (1) Data frame handling\nI0518 23:51:12.606498 145 log.go:172] (0xc0002a2e60) (1) Data frame sent\nI0518 23:51:12.606533 145 log.go:172] (0xc00003a160) (0xc0002a2e60) Stream removed, broadcasting: 1\nI0518 23:51:12.606579 145 log.go:172] (0xc00003a160) Go away received\nI0518 23:51:12.607053 145 log.go:172] (0xc00003a160) (0xc0002a2e60) Stream removed, broadcasting: 1\nI0518 23:51:12.607076 145 log.go:172] (0xc00003a160) (0xc0002a3cc0) Stream removed, broadcasting: 3\nI0518 23:51:12.607089 145 log.go:172] (0xc00003a160) (0xc00014f540) Stream removed, broadcasting: 5\n" May 18 23:51:12.612: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 18 23:51:12.612: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 18 23:51:12.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4386 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 18 23:51:12.949: INFO: stderr: "I0518 23:51:12.747191 165 log.go:172] (0xc000a900b0) (0xc00043ac80) Create stream\nI0518 23:51:12.747276 165 log.go:172] (0xc000a900b0) (0xc00043ac80) Stream added, broadcasting: 1\nI0518 23:51:12.750971 165 log.go:172] (0xc000a900b0) Reply frame received for 1\nI0518 23:51:12.751020 165 log.go:172] (0xc000a900b0) (0xc0001be6e0) Create stream\nI0518 23:51:12.751036 165 log.go:172] (0xc000a900b0) (0xc0001be6e0) Stream added, broadcasting: 3\nI0518 23:51:12.752136 165 log.go:172] (0xc000a900b0) Reply frame received for 3\nI0518 23:51:12.752179 165 log.go:172] (0xc000a900b0) (0xc000350c80) Create stream\nI0518 23:51:12.752195 165 log.go:172] (0xc000a900b0) (0xc000350c80) Stream added, broadcasting: 5\nI0518 23:51:12.753379 165 log.go:172] (0xc000a900b0) Reply frame received for 5\nI0518 23:51:12.810062 165 log.go:172] (0xc000a900b0) Data frame received for 5\nI0518 23:51:12.810089 165 log.go:172] (0xc000350c80) (5) Data frame handling\nI0518 23:51:12.810105 165 log.go:172] (0xc000350c80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0518 23:51:12.939518 165 log.go:172] (0xc000a900b0) Data frame received for 5\nI0518 23:51:12.939552 165 log.go:172] (0xc000350c80) (5) Data frame handling\nI0518 23:51:12.939570 165 log.go:172] (0xc000a900b0) Data frame received for 3\nI0518 23:51:12.939575 165 log.go:172] (0xc0001be6e0) (3) Data frame handling\nI0518 23:51:12.939581 165 log.go:172] (0xc0001be6e0) (3) 
Data frame sent\nI0518 23:51:12.939587 165 log.go:172] (0xc000a900b0) Data frame received for 3\nI0518 23:51:12.939592 165 log.go:172] (0xc0001be6e0) (3) Data frame handling\nI0518 23:51:12.942034 165 log.go:172] (0xc000a900b0) Data frame received for 1\nI0518 23:51:12.942052 165 log.go:172] (0xc00043ac80) (1) Data frame handling\nI0518 23:51:12.942060 165 log.go:172] (0xc00043ac80) (1) Data frame sent\nI0518 23:51:12.942070 165 log.go:172] (0xc000a900b0) (0xc00043ac80) Stream removed, broadcasting: 1\nI0518 23:51:12.942082 165 log.go:172] (0xc000a900b0) Go away received\nI0518 23:51:12.942550 165 log.go:172] (0xc000a900b0) (0xc00043ac80) Stream removed, broadcasting: 1\nI0518 23:51:12.942583 165 log.go:172] (0xc000a900b0) (0xc0001be6e0) Stream removed, broadcasting: 3\nI0518 23:51:12.942602 165 log.go:172] (0xc000a900b0) (0xc000350c80) Stream removed, broadcasting: 5\n" May 18 23:51:12.949: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 18 23:51:12.949: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 18 23:51:12.949: INFO: Waiting for statefulset status.replicas updated to 0 May 18 23:51:12.966: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 18 23:51:22.997: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 18 23:51:22.997: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 18 23:51:22.997: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 18 23:51:23.033: INFO: POD NODE PHASE GRACE CONDITIONS May 18 23:51:23.033: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:50:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:50:38 +0000 UTC }] May 18 23:51:23.033: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:01 +0000 UTC }] May 18 23:51:23.033: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:13 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:01 +0000 UTC }] May 18 23:51:23.033: INFO: May 18 23:51:23.033: INFO: StatefulSet ss has not reached scale 0, at 3 May 18 23:51:24.224: INFO: POD NODE PHASE GRACE CONDITIONS May 18 23:51:24.224: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:50:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-05-18 23:51:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:51:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-18 23:50:38 +0000 UTC }] (matching condition dumps for ss-1 and ss-2 followed, and the 23:51:25 poll repeated all three unchanged; each poll ended "StatefulSet ss has not reached scale 0, at 3") May 18 23:51:26.234: INFO: POD NODE PHASE GRACE CONDITIONS May 18 23:51:26.234: INFO: ss-0 latest-worker Pending 30s [conditions unchanged] May 18 23:51:26.235: INFO: ss-2 latest-worker Pending 30s [conditions unchanged] May 18 23:51:26.235: INFO: StatefulSet ss has not reached scale 0, at 2 (ss-1 was gone from this poll onward; the same two-pod dump repeated about once per second through 23:51:32, each poll ending "StatefulSet ss has not reached scale 0, at 2") STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4386 May 18 23:51:33.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 18 23:51:33.443: INFO: rc: 1 May 18 23:51:33.443: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl ... exec --namespace=statefulset-4386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 18 23:51:43.444: INFO: Running the same kubectl exec command May 18 23:51:43.558: INFO: rc: 1 May 18 23:51:43.558: INFO: Waiting 10s to retry failed RunHostCmd: ... Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 (the identical RunHostCmd attempt was retried every 10s through 23:56:26, each returning rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found') May 18 23:56:36.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4386 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 18 23:56:36.664: INFO: rc: 1 May 18 23:56:36.664: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 18 23:56:36.664: INFO: Scaling statefulset ss to 0 May 18 23:56:36.684: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 18 23:56:36.686: INFO: Deleting all statefulset in ns statefulset-4386 May 18 23:56:36.689:
INFO: Scaling statefulset ss to 0 May 18 23:56:36.698: INFO: Waiting for statefulset status.replicas updated to 0 May 18 23:56:36.700: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:56:36.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4386" for this suite. • [SLOW TEST:358.467 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":31,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:56:36.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 18 23:56:40.920: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:56:40.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8586" for this suite. 
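The behavior this spec verified can be reproduced by hand with a pod along the following lines (a minimal sketch; the pod name and busybox image are illustrative stand-ins, not the suite's generated resources):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                   # any image with /bin/sh
    # Write the message to the termination-message file and exit 0;
    # with FallbackToLogsOnError the file is used because the pod succeeds.
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the pod has succeeded, the message should read back as "OK":
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'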
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":32,"skipped":695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:56:40.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-8614529a-232c-4204-9c42-d2eb87a7db75 STEP: Creating configMap with name cm-test-opt-upd-69c6fdad-32cf-41d3-b66a-3bd80234b6ab STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8614529a-232c-4204-9c42-d2eb87a7db75 STEP: Updating configmap cm-test-opt-upd-69c6fdad-32cf-41d3-b66a-3bd80234b6ab STEP: Creating configMap with name cm-test-opt-create-bdc3ebe8-133c-43cf-b67c-59079fb65b88 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:56:49.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6152" for this suite. 
• [SLOW TEST:8.319 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":33,"skipped":738,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:56:49.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-145757f7-0cd2-4d09-8982-ebe617b20d49 STEP: Creating a pod to test consume configMaps May 18 23:56:49.503: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3c90a635-24b2-440b-865d-9ec97b50e42f" in namespace "projected-2448" to be "Succeeded or Failed" May 18 23:56:49.523: INFO: Pod "pod-projected-configmaps-3c90a635-24b2-440b-865d-9ec97b50e42f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.147748ms May 18 23:56:51.527: INFO: Pod "pod-projected-configmaps-3c90a635-24b2-440b-865d-9ec97b50e42f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024147446s May 18 23:56:53.531: INFO: Pod "pod-projected-configmaps-3c90a635-24b2-440b-865d-9ec97b50e42f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028345747s STEP: Saw pod success May 18 23:56:53.531: INFO: Pod "pod-projected-configmaps-3c90a635-24b2-440b-865d-9ec97b50e42f" satisfied condition "Succeeded or Failed" May 18 23:56:53.535: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-3c90a635-24b2-440b-865d-9ec97b50e42f container projected-configmap-volume-test: STEP: delete the pod May 18 23:56:53.764: INFO: Waiting for pod pod-projected-configmaps-3c90a635-24b2-440b-865d-9ec97b50e42f to disappear May 18 23:56:53.778: INFO: Pod pod-projected-configmaps-3c90a635-24b2-440b-865d-9ec97b50e42f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:56:53.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2448" for this suite. 
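A hand-rolled equivalent of the projected-ConfigMap volume with a key-to-path mapping and a per-item mode looks roughly like this (a sketch with hypothetical names; the key/path pair mirrors the test's data-1 -> path/to/data-2 convention):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo-pod        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/projected/path/to && cat /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-2     # key remapped to a different path
            mode: 0400               # per-item file mode, as in the test name
EOF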
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":748,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:56:53.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-c4ea5e79-a35b-41a7-bcba-680d523952e4 STEP: Creating a pod to test consume secrets May 18 23:56:53.937: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0c4b47dc-85aa-4b4a-93a5-c52a55e7b408" in namespace "projected-5023" to be "Succeeded or Failed" May 18 23:56:53.946: INFO: Pod "pod-projected-secrets-0c4b47dc-85aa-4b4a-93a5-c52a55e7b408": Phase="Pending", Reason="", readiness=false. Elapsed: 9.170231ms May 18 23:56:55.975: INFO: Pod "pod-projected-secrets-0c4b47dc-85aa-4b4a-93a5-c52a55e7b408": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037533551s May 18 23:56:57.979: INFO: Pod "pod-projected-secrets-0c4b47dc-85aa-4b4a-93a5-c52a55e7b408": Phase="Running", Reason="", readiness=true. Elapsed: 4.04201836s May 18 23:56:59.982: INFO: Pod "pod-projected-secrets-0c4b47dc-85aa-4b4a-93a5-c52a55e7b408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045064736s STEP: Saw pod success May 18 23:56:59.982: INFO: Pod "pod-projected-secrets-0c4b47dc-85aa-4b4a-93a5-c52a55e7b408" satisfied condition "Succeeded or Failed" May 18 23:56:59.984: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-0c4b47dc-85aa-4b4a-93a5-c52a55e7b408 container projected-secret-volume-test: STEP: delete the pod May 18 23:57:00.040: INFO: Waiting for pod pod-projected-secrets-0c4b47dc-85aa-4b4a-93a5-c52a55e7b408 to disappear May 18 23:57:00.050: INFO: Pod pod-projected-secrets-0c4b47dc-85aa-4b4a-93a5-c52a55e7b408 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:00.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5023" for this suite. 
• [SLOW TEST:6.285 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":761,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:00.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:00.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2215" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":36,"skipped":765,"failed":0} S ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:00.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:00.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3740" for this suite. 
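The check this spec performs amounts to creating a service in one namespace and confirming it appears in a cluster-wide listing, e.g. (hypothetical names):

kubectl create namespace svc-list-demo
kubectl create service clusterip svc-demo --tcp=80:80 -n svc-list-demo
# The service should show up when listing across all namespaces:
kubectl get services --all-namespaces | grep svc-demo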
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":37,"skipped":766,"failed":0} S ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:00.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:00.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6797" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":38,"skipped":767,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:00.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 18 23:57:09.058: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 18 23:57:09.075: INFO: Pod pod-with-poststart-exec-hook still exists May 18 23:57:11.075: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 18 23:57:11.080: INFO: Pod pod-with-poststart-exec-hook still exists May 18 23:57:13.075: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 18 23:57:13.080: INFO: Pod pod-with-poststart-exec-hook still exists May 18 23:57:15.075: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 18 23:57:15.080: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:15.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9992" for this suite. 
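In the suite, the postStart exec hook reports back to the HTTPGet handler pod created above; a self-contained variant of the same mechanism looks like this (a sketch with hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo               # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it is created; the pod is
          # not marked Running until the hook completes.
          command: ["/bin/sh", "-c", "echo hook-ran > /tmp/poststart"]
EOF
kubectl exec poststart-demo -- cat /tmp/poststart   # prints hook-ran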
• [SLOW TEST:14.349 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":816,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:15.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 18 23:57:15.171: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ccd8469f-5db2-4364-b053-6ca571efe560" in namespace "security-context-test-175" to be "Succeeded or Failed" May 18 23:57:15.201: INFO: Pod "busybox-user-65534-ccd8469f-5db2-4364-b053-6ca571efe560": Phase="Pending", Reason="", readiness=false. Elapsed: 29.786106ms May 18 23:57:17.412: INFO: Pod "busybox-user-65534-ccd8469f-5db2-4364-b053-6ca571efe560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240873577s May 18 23:57:19.416: INFO: Pod "busybox-user-65534-ccd8469f-5db2-4364-b053-6ca571efe560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.244846471s May 18 23:57:19.416: INFO: Pod "busybox-user-65534-ccd8469f-5db2-4364-b053-6ca571efe560" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:19.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-175" for this suite. 
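The runAsUser assertion reduces to running `id -u` under the requested uid (a sketch; pod name is hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: runasuser-demo               # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "id -u"]
    securityContext:
      runAsUser: 65534               # "nobody", the uid the test asserts
EOF
kubectl logs runasuser-demo          # prints 65534 once the pod has run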
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":817,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:19.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 18 23:57:20.776: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 18 23:57:22.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443040, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443041, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443040, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 18 23:57:24.835: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443040, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443040, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443041, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443040, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 18 23:57:27.821: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, 
update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 18 23:57:27.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:28.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-208" for this suite. STEP: Destroying namespace "webhook-208-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.719 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":41,"skipped":822,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:29.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 18 23:57:29.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6329' May 18 23:57:29.545: INFO: stderr: "" May 18 23:57:29.545: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 18 23:57:30.700: INFO: Selector matched 1 pod for map[app:agnhost] May 18 23:57:30.701: INFO: Found 0 / 1 May 18 23:57:31.549: INFO: Selector matched 1 pod for map[app:agnhost] May 18 23:57:31.550: INFO: Found 0 / 1 May 18 23:57:32.549: INFO: Selector matched 1 pod for map[app:agnhost] May 18 23:57:32.549: INFO: Found 0 / 1 May 18 23:57:33.557: INFO: Selector matched 1 pod for map[app:agnhost] May 18 23:57:33.558: INFO: Found 1 / 1 May 18 23:57:33.558: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 18 23:57:33.604: INFO: Selector matched 1 pod for map[app:agnhost] May 18 23:57:33.604: INFO: ForEach: Found 1 pod from the filter. Now looping through them. May 18 23:57:33.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-98l8g --namespace=kubectl-6329 -p {"metadata":{"annotations":{"x":"y"}}}' May 18 23:57:33.712: INFO: stderr: "" May 18 23:57:33.712: INFO: stdout: "pod/agnhost-master-98l8g patched\n" STEP: checking annotations May 18 23:57:33.741: INFO: Selector matched 1 pod for map[app:agnhost] May 18 23:57:33.742: INFO: ForEach: Found 1 pod from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:33.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6329" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":42,"skipped":823,"failed":0} ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:33.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 18 23:57:33.828: INFO: Waiting up to 5m0s for pod "client-containers-eeaaab79-9bd3-4952-8a4c-0dca0826438f" in namespace "containers-117" to be "Succeeded or Failed" May 18 23:57:33.847: INFO: Pod "client-containers-eeaaab79-9bd3-4952-8a4c-0dca0826438f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.884629ms May 18 23:57:35.851: INFO: Pod "client-containers-eeaaab79-9bd3-4952-8a4c-0dca0826438f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023171154s May 18 23:57:37.855: INFO: Pod "client-containers-eeaaab79-9bd3-4952-8a4c-0dca0826438f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.02681773s STEP: Saw pod success May 18 23:57:37.855: INFO: Pod "client-containers-eeaaab79-9bd3-4952-8a4c-0dca0826438f" satisfied condition "Succeeded or Failed" May 18 23:57:37.857: INFO: Trying to get logs from node latest-worker2 pod client-containers-eeaaab79-9bd3-4952-8a4c-0dca0826438f container test-container: STEP: delete the pod May 18 23:57:37.935: INFO: Waiting for pod client-containers-eeaaab79-9bd3-4952-8a4c-0dca0826438f to disappear May 18 23:57:37.948: INFO: Pod client-containers-eeaaab79-9bd3-4952-8a4c-0dca0826438f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:37.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-117" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":823,"failed":0} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:37.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 18 23:57:38.007: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:44.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4061" for this suite. 
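The init-container spec wrapping up here relies on a documented rule: with restartPolicy: Never, the first init container failure is terminal for the pod, so the app containers never start and the pod phase becomes Failed. A minimal sketch of such a pod object in Go; the pod, container, and image names are illustrative, not the test's actual values.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
        Spec: corev1.PodSpec{
            // Never restart: a failing init container fails the whole pod.
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{{
                Name:    "init1",
                Image:   "busybox",
                Command: []string{"/bin/false"}, // always exits non-zero
            }},
            Containers: []corev1.Container{{
                Name:    "run1",
                Image:   "busybox",
                Command: []string{"/bin/true"}, // never reached
            }},
        },
    }
    fmt.Printf("restartPolicy=%s initContainers=%d\n", pod.Spec.RestartPolicy, len(pod.Spec.InitContainers))
}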
• [SLOW TEST:6.589 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":44,"skipped":829,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:44.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 18 23:57:45.664: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 18 23:57:47.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443065, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443065, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443065, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443065, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 18 23:57:50.775: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 18 23:57:50.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2373-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:57:51.898: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-7763" for this suite. STEP: Destroying namespace "webhook-7763-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.429 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":45,"skipped":850,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:57:51.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-zzcn STEP: Creating a pod to test atomic-volume-subpath May 18 23:57:52.073: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zzcn" in namespace "subpath-7002" to be "Succeeded or Failed" May 18 23:57:52.096: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Pending", Reason="", readiness=false. Elapsed: 22.177807ms May 18 23:57:54.100: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026873211s May 18 23:57:56.104: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. Elapsed: 4.030970731s May 18 23:57:58.112: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. Elapsed: 6.038092546s May 18 23:58:00.116: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. Elapsed: 8.042173429s May 18 23:58:02.138: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. Elapsed: 10.064307241s May 18 23:58:04.142: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. Elapsed: 12.068933419s May 18 23:58:06.147: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. Elapsed: 14.073605973s May 18 23:58:08.151: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. Elapsed: 16.077885812s May 18 23:58:10.156: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. Elapsed: 18.082648424s May 18 23:58:12.160: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.086850846s May 18 23:58:14.165: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Running", Reason="", readiness=true. Elapsed: 22.091953631s May 18 23:58:16.170: INFO: Pod "pod-subpath-test-secret-zzcn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.096592103s STEP: Saw pod success May 18 23:58:16.170: INFO: Pod "pod-subpath-test-secret-zzcn" satisfied condition "Succeeded or Failed" May 18 23:58:16.174: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-zzcn container test-container-subpath-secret-zzcn: STEP: delete the pod May 18 23:58:16.202: INFO: Waiting for pod pod-subpath-test-secret-zzcn to disappear May 18 23:58:16.212: INFO: Pod pod-subpath-test-secret-zzcn no longer exists STEP: Deleting pod pod-subpath-test-secret-zzcn May 18 23:58:16.212: INFO: Deleting pod "pod-subpath-test-secret-zzcn" in namespace "subpath-7002" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:58:16.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7002" for this suite. • [SLOW TEST:24.248 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":46,"skipped":852,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:58:16.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 18 23:58:32.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1214" for this suite. • [SLOW TEST:16.296 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":47,"skipped":874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 18 23:58:32.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 18 23:58:32.583: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 18 23:58:32.603: INFO: Waiting for terminating namespaces to be deleted... 
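For reference on the scope behavior the ResourceQuota spec above verifies: a quota with the Terminating scope counts only pods whose spec.activeDeadlineSeconds is set, while the NotTerminating scope counts only pods without it, which is why each pod shows up in exactly one of the two quotas. A minimal sketch of a Terminating-scoped quota in Go; the quota name and limits are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    quota := &corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "quota-terminating"},
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{
                corev1.ResourcePods:        resource.MustParse("1"),
                corev1.ResourceRequestsCPU: resource.MustParse("500m"),
            },
            // Only pods with spec.activeDeadlineSeconds set count here.
            Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeTerminating},
        },
    }
    fmt.Println(quota.Name, quota.Spec.Scopes)
}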
May 18 23:58:32.607: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 18 23:58:32.613: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 18 23:58:32.613: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 18 23:58:32.613: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 18 23:58:32.613: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 18 23:58:32.613: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 18 23:58:32.613: INFO: Container kindnet-cni ready: true, restart count 0 May 18 23:58:32.613: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 18 23:58:32.613: INFO: Container kube-proxy ready: true, restart count 0 May 18 23:58:32.613: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 18 23:58:32.619: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 18 23:58:32.619: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 18 23:58:32.619: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 18 23:58:32.619: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 18 23:58:32.619: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 18 23:58:32.619: INFO: Container kindnet-cni ready: true, restart count 0 May 18 23:58:32.619: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 18 23:58:32.619: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-67bf6551-f6ef-4024-bfb1-904f07d92efd 95 STEP: Trying to create a pod (pod4) with hostport 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostport 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-67bf6551-f6ef-4024-bfb1-904f07d92efd off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-67bf6551-f6ef-4024-bfb1-904f07d92efd [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:03:40.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4304" for this suite.
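The not-scheduled outcome above follows from the hostPort conflict rule: an empty hostIP is treated as 0.0.0.0, which claims the port on every address of the node, so a second pod with the same hostPort and protocol conflicts even when it binds only 127.0.0.1. A sketch of the two conflicting port declarations in Go; the containerPort values are illustrative, while the hostPort and hostIPs mirror the log.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // pod4: hostIP omitted, which behaves like 0.0.0.0 (all node addresses).
    pod4Port := corev1.ContainerPort{ContainerPort: 8080, HostPort: 54322, Protocol: corev1.ProtocolTCP}
    // pod5: same hostPort and protocol, bound to 127.0.0.1 on the same node.
    pod5Port := corev1.ContainerPort{ContainerPort: 8080, HostPort: 54322, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}
    // 0.0.0.0 overlaps every hostIP, so pod5 cannot land on pod4's node.
    fmt.Println(pod4Port, pod5Port)
}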
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.333 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":48,"skipped":897,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:03:40.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0519 00:03:42.012029 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 00:03:42.012: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:03:42.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5130" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":49,"skipped":930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:03:42.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 19 00:03:49.208: INFO: 9 pods remaining May 19 00:03:49.208: INFO: 0 pods has nil DeletionTimestamp May 19 00:03:49.208: INFO: May 19 00:03:50.830: INFO: 0 pods remaining May 19 00:03:50.830: INFO: 0 pods has nil DeletionTimestamp May 19 00:03:50.830: INFO: May 19 00:03:51.518: INFO: 0 pods remaining May 19 00:03:51.518: INFO: 0 pods has nil DeletionTimestamp May 19 00:03:51.518: INFO: STEP: Gathering metrics W0519 00:03:53.734789 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 00:03:53.734: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:03:53.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1380" for this suite. 
• [SLOW TEST:12.381 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":50,"skipped":1006,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:03:54.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-db51aecc-1cc4-48e7-852d-7fc7cda96919 STEP: Creating a pod to test consume configMaps May 19 00:03:54.924: INFO: Waiting up to 5m0s for pod "pod-configmaps-1403580d-a6af-4436-9a5b-20305b0388e6" in namespace "configmap-4071" to be "Succeeded or Failed" May 19 00:03:55.201: INFO: Pod "pod-configmaps-1403580d-a6af-4436-9a5b-20305b0388e6": Phase="Pending", Reason="", readiness=false. Elapsed: 277.039066ms May 19 00:03:57.386: INFO: Pod "pod-configmaps-1403580d-a6af-4436-9a5b-20305b0388e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.461822542s May 19 00:03:59.390: INFO: Pod "pod-configmaps-1403580d-a6af-4436-9a5b-20305b0388e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.466238514s STEP: Saw pod success May 19 00:03:59.390: INFO: Pod "pod-configmaps-1403580d-a6af-4436-9a5b-20305b0388e6" satisfied condition "Succeeded or Failed" May 19 00:03:59.393: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1403580d-a6af-4436-9a5b-20305b0388e6 container configmap-volume-test: STEP: delete the pod May 19 00:03:59.544: INFO: Waiting for pod pod-configmaps-1403580d-a6af-4436-9a5b-20305b0388e6 to disappear May 19 00:03:59.620: INFO: Pod pod-configmaps-1403580d-a6af-4436-9a5b-20305b0388e6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:03:59.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4071" for this suite. 
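The "with mappings" variant above uses items in the ConfigMap volume source to project a selected key onto a chosen relative path, instead of mounting every key under a file named after the key. A sketch of the volume definition in Go; the ConfigMap name, key, and target path are illustrative.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                // Key "data-1" appears at <mountPath>/path/to/data-2
                // rather than at the default <mountPath>/data-1.
                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
            },
        },
    }
    fmt.Println(vol.Name, vol.VolumeSource.ConfigMap.Items)
}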
• [SLOW TEST:5.329 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":51,"skipped":1017,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:03:59.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:04:00.439: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:04:02.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443440, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443440, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443440, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443440, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:04:04.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443440, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443440, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443440, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443440, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:04:07.515: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:04:17.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9047" for this suite. STEP: Destroying namespace "webhook-9047-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.077 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":52,"skipped":1026,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:04:17.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
dns-test-service-2.dns-4716.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4716.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4716.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4716.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4716.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4716.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 00:04:25.952: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:25.956: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:25.959: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:25.962: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:25.972: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:25.975: INFO: Unable to read 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:25.979: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:25.982: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:25.989: INFO: Lookups using dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local] May 19 00:04:30.995: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:30.998: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:31.002: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:31.006: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:31.017: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:31.020: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:31.024: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:31.027: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested 
resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:31.034: INFO: Lookups using dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local] May 19 00:04:35.995: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:35.999: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:36.002: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:36.005: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:36.014: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:36.017: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:36.020: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:36.024: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:36.031: INFO: Lookups using dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local] May 19 00:04:40.995: INFO: 
Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:41.000: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:41.002: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:41.005: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:41.014: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:41.017: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:41.019: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:41.022: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:41.034: INFO: Lookups using dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local] May 19 00:04:45.994: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:46.027: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:46.040: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server 
could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:46.043: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:46.051: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:46.055: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:46.057: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:46.059: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:46.064: INFO: Lookups using dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local] May 19 00:04:51.011: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:51.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:51.016: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:51.018: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:51.050: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:51.055: INFO: Unable to read 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:51.058: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:51.060: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local from pod dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807: the server could not find the requested resource (get pods dns-test-f250344e-1d76-4502-b85d-9712f15db807) May 19 00:04:51.066: INFO: Lookups using dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4716.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local jessie_udp@dns-test-service-2.dns-4716.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4716.svc.cluster.local] May 19 00:04:56.023: INFO: DNS probes using dns-4716/dns-test-f250344e-1d76-4502-b85d-9712f15db807 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:04:56.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4716" for this suite. 
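The hostnames probed above come from the pod-subdomain rule: a pod whose spec.hostname and spec.subdomain match a headless service is resolvable as <hostname>.<subdomain>.<namespace>.svc.cluster.local, which is exactly the dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local name in the log. A minimal Go lookup of such a name; it only resolves from a pod inside the cluster.

package main

import (
    "fmt"
    "net"
)

func main() {
    // Served by the cluster DNS; fails when run outside the cluster.
    name := "dns-querier-2.dns-test-service-2.dns-4716.svc.cluster.local"
    addrs, err := net.LookupHost(name)
    if err != nil {
        fmt.Println("lookup failed (expected outside the cluster):", err)
        return
    }
    fmt.Println(name, "->", addrs)
}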
• [SLOW TEST:38.929 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":53,"skipped":1035,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:04:56.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8273/configmap-test-d447fd39-c9e6-4198-9203-bf73723ac834 STEP: Creating a pod to test consume configMaps May 19 00:04:56.890: INFO: Waiting up to 5m0s for pod "pod-configmaps-42a77b42-d7a2-43a0-8a00-745e7ada7fd6" in namespace "configmap-8273" to be "Succeeded or Failed" May 19 00:04:56.903: INFO: Pod "pod-configmaps-42a77b42-d7a2-43a0-8a00-745e7ada7fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.645661ms May 19 00:04:58.918: INFO: Pod "pod-configmaps-42a77b42-d7a2-43a0-8a00-745e7ada7fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028710811s May 19 00:05:00.922: INFO: Pod "pod-configmaps-42a77b42-d7a2-43a0-8a00-745e7ada7fd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032520182s STEP: Saw pod success May 19 00:05:00.922: INFO: Pod "pod-configmaps-42a77b42-d7a2-43a0-8a00-745e7ada7fd6" satisfied condition "Succeeded or Failed" May 19 00:05:00.925: INFO: Trying to get logs from node latest-worker pod pod-configmaps-42a77b42-d7a2-43a0-8a00-745e7ada7fd6 container env-test: STEP: delete the pod May 19 00:05:01.270: INFO: Waiting for pod pod-configmaps-42a77b42-d7a2-43a0-8a00-745e7ada7fd6 to disappear May 19 00:05:01.293: INFO: Pod pod-configmaps-42a77b42-d7a2-43a0-8a00-745e7ada7fd6 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:05:01.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8273" for this suite. 
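The ConfigMap pod in this test reads the ConfigMap through its environment and exits, which is why its phase goes Pending to Succeeded rather than Running. A minimal sketch of such a pod built from the Kubernetes Go API types; the ConfigMap name and key are illustrative stand-ins for the generated configmap-test-... object, and the program only prints the manifest as JSON rather than submitting it:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				// Inject one ConfigMap key as an environment variable;
				// "configmap-test" / "data-1" are assumed names for illustration.
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}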
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":54,"skipped":1095,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:05:01.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 19 00:05:01.482: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2174 /api/v1/namespaces/watch-2174/configmaps/e2e-watch-test-watch-closed 46fc1ec9-b98f-46ec-92c2-7fddf33680ca 5811736 0 2020-05-19 00:05:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-19 00:05:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:05:01.482: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2174 /api/v1/namespaces/watch-2174/configmaps/e2e-watch-test-watch-closed 46fc1ec9-b98f-46ec-92c2-7fddf33680ca 5811737 0 2020-05-19 00:05:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-19 00:05:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 19 00:05:01.553: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2174 /api/v1/namespaces/watch-2174/configmaps/e2e-watch-test-watch-closed 46fc1ec9-b98f-46ec-92c2-7fddf33680ca 5811738 0 2020-05-19 00:05:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-19 00:05:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:05:01.554: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2174 /api/v1/namespaces/watch-2174/configmaps/e2e-watch-test-watch-closed 46fc1ec9-b98f-46ec-92c2-7fddf33680ca 5811739 0 2020-05-19 00:05:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-19 00:05:01 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:05:01.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2174" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":55,"skipped":1110,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:05:01.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 19 00:05:01.664: INFO: Waiting up to 5m0s for pod "pod-e474b381-191e-4023-96c5-9cc6db460d39" in namespace "emptydir-6705" to be "Succeeded or Failed" May 19 00:05:01.686: INFO: Pod "pod-e474b381-191e-4023-96c5-9cc6db460d39": Phase="Pending", Reason="", readiness=false. Elapsed: 22.298471ms May 19 00:05:03.690: INFO: Pod "pod-e474b381-191e-4023-96c5-9cc6db460d39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026042928s May 19 00:05:05.694: INFO: Pod "pod-e474b381-191e-4023-96c5-9cc6db460d39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030123365s STEP: Saw pod success May 19 00:05:05.694: INFO: Pod "pod-e474b381-191e-4023-96c5-9cc6db460d39" satisfied condition "Succeeded or Failed" May 19 00:05:05.697: INFO: Trying to get logs from node latest-worker pod pod-e474b381-191e-4023-96c5-9cc6db460d39 container test-container: STEP: delete the pod May 19 00:05:05.786: INFO: Waiting for pod pod-e474b381-191e-4023-96c5-9cc6db460d39 to disappear May 19 00:05:05.895: INFO: Pod pod-e474b381-191e-4023-96c5-9cc6db460d39 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:05:05.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6705" for this suite. 
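The emptydir variants all follow the pattern just shown: mount an EmptyDir volume, write a file with the mode under test from the stated user, and verify the result in the pod log. A minimal sketch of the volume wiring with illustrative names and a simplified command, again only printed as JSON:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // non-root user, as in the (non-root,0644) variant above
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Default medium is node-local disk; StorageMediumMemory would use tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox",
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				Command:         []string{"sh", "-c", "echo hello > /mnt/file && chmod 0644 /mnt/file && ls -l /mnt/file"},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}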
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":56,"skipped":1114,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:05:05.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 19 00:05:05.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3563' May 19 00:05:09.195: INFO: stderr: "" May 19 00:05:09.195: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 00:05:09.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3563' May 19 00:05:09.323: INFO: stderr: "" May 19 00:05:09.323: INFO: stdout: "update-demo-nautilus-g8czv update-demo-nautilus-nvf6c " May 19 00:05:09.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8czv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:09.419: INFO: stderr: "" May 19 00:05:09.420: INFO: stdout: "" May 19 00:05:09.420: INFO: update-demo-nautilus-g8czv is created but not running May 19 00:05:14.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3563' May 19 00:05:14.539: INFO: stderr: "" May 19 00:05:14.539: INFO: stdout: "update-demo-nautilus-g8czv update-demo-nautilus-nvf6c " May 19 00:05:14.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8czv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:14.643: INFO: stderr: "" May 19 00:05:14.643: INFO: stdout: "true" May 19 00:05:14.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g8czv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:14.749: INFO: stderr: "" May 19 00:05:14.749: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 00:05:14.749: INFO: validating pod update-demo-nautilus-g8czv May 19 00:05:14.765: INFO: got data: { "image": "nautilus.jpg" } May 19 00:05:14.766: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 00:05:14.766: INFO: update-demo-nautilus-g8czv is verified up and running May 19 00:05:14.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nvf6c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:14.867: INFO: stderr: "" May 19 00:05:14.867: INFO: stdout: "true" May 19 00:05:14.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nvf6c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:14.997: INFO: stderr: "" May 19 00:05:14.997: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 00:05:14.997: INFO: validating pod update-demo-nautilus-nvf6c May 19 00:05:15.008: INFO: got data: { "image": "nautilus.jpg" } May 19 00:05:15.008: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 00:05:15.008: INFO: update-demo-nautilus-nvf6c is verified up and running STEP: scaling down the replication controller May 19 00:05:15.010: INFO: scanned /root for discovery docs: May 19 00:05:15.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3563' May 19 00:05:16.138: INFO: stderr: "" May 19 00:05:16.138: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 19 00:05:16.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3563' May 19 00:05:16.267: INFO: stderr: "" May 19 00:05:16.268: INFO: stdout: "update-demo-nautilus-g8czv update-demo-nautilus-nvf6c " STEP: Replicas for name=update-demo: expected=1 actual=2 May 19 00:05:21.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3563' May 19 00:05:21.381: INFO: stderr: "" May 19 00:05:21.381: INFO: stdout: "update-demo-nautilus-g8czv update-demo-nautilus-nvf6c " STEP: Replicas for name=update-demo: expected=1 actual=2 May 19 00:05:26.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3563' May 19 00:05:26.498: INFO: stderr: "" May 19 00:05:26.498: INFO: stdout: "update-demo-nautilus-nvf6c " May 19 00:05:26.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nvf6c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:26.598: INFO: stderr: "" May 19 00:05:26.598: INFO: stdout: "true" May 19 00:05:26.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nvf6c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:26.691: INFO: stderr: "" May 19 00:05:26.691: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 00:05:26.691: INFO: validating pod update-demo-nautilus-nvf6c May 19 00:05:26.695: INFO: got data: { "image": "nautilus.jpg" } May 19 00:05:26.695: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 00:05:26.695: INFO: update-demo-nautilus-nvf6c is verified up and running STEP: scaling up the replication controller May 19 00:05:26.696: INFO: scanned /root for discovery docs: May 19 00:05:26.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3563' May 19 00:05:27.878: INFO: stderr: "" May 19 00:05:27.878: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 00:05:27.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3563' May 19 00:05:28.005: INFO: stderr: "" May 19 00:05:28.005: INFO: stdout: "update-demo-nautilus-8plz2 update-demo-nautilus-nvf6c " May 19 00:05:28.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8plz2 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:28.101: INFO: stderr: "" May 19 00:05:28.101: INFO: stdout: "" May 19 00:05:28.101: INFO: update-demo-nautilus-8plz2 is created but not running May 19 00:05:33.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3563' May 19 00:05:33.224: INFO: stderr: "" May 19 00:05:33.224: INFO: stdout: "update-demo-nautilus-8plz2 update-demo-nautilus-nvf6c " May 19 00:05:33.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8plz2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:33.324: INFO: stderr: "" May 19 00:05:33.324: INFO: stdout: "true" May 19 00:05:33.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8plz2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:33.445: INFO: stderr: "" May 19 00:05:33.445: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 00:05:33.445: INFO: validating pod update-demo-nautilus-8plz2 May 19 00:05:33.449: INFO: got data: { "image": "nautilus.jpg" } May 19 00:05:33.449: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 00:05:33.449: INFO: update-demo-nautilus-8plz2 is verified up and running May 19 00:05:33.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nvf6c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:33.540: INFO: stderr: "" May 19 00:05:33.540: INFO: stdout: "true" May 19 00:05:33.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nvf6c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3563' May 19 00:05:33.642: INFO: stderr: "" May 19 00:05:33.642: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 00:05:33.642: INFO: validating pod update-demo-nautilus-nvf6c May 19 00:05:33.645: INFO: got data: { "image": "nautilus.jpg" } May 19 00:05:33.645: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 19 00:05:33.645: INFO: update-demo-nautilus-nvf6c is verified up and running STEP: using delete to clean up resources May 19 00:05:33.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3563' May 19 00:05:33.753: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 00:05:33.753: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 19 00:05:33.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3563' May 19 00:05:33.859: INFO: stderr: "No resources found in kubectl-3563 namespace.\n" May 19 00:05:33.859: INFO: stdout: "" May 19 00:05:33.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3563 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 00:05:33.955: INFO: stderr: "" May 19 00:05:33.955: INFO: stdout: "update-demo-nautilus-8plz2\nupdate-demo-nautilus-nvf6c\n" May 19 00:05:34.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3563' May 19 00:05:34.581: INFO: stderr: "No resources found in kubectl-3563 namespace.\n" May 19 00:05:34.581: INFO: stdout: "" May 19 00:05:34.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3563 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 00:05:34.688: INFO: stderr: "" May 19 00:05:34.688: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:05:34.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3563" for this suite. • [SLOW TEST:28.794 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":57,"skipped":1121,"failed":0} SSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:05:34.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 19 00:05:34.762: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 19 00:05:34.967: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 19 00:05:34.967: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 19 00:05:35.027: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 19 00:05:35.027: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 19 00:05:35.063: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 19 00:05:35.063: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 19 00:05:42.756: INFO: 
limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:05:42.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-7295" for this suite. • [SLOW TEST:8.122 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":58,"skipped":1125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:05:42.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-18f4000c-1bb9-4e93-97dc-2862210bdfbf in namespace container-probe-3757 May 19 00:05:48.998: INFO: Started pod liveness-18f4000c-1bb9-4e93-97dc-2862210bdfbf in namespace container-probe-3757 STEP: checking the pod's current state and verifying that restartCount is present May 19 00:05:49.002: INFO: Initial restart count of pod liveness-18f4000c-1bb9-4e93-97dc-2862210bdfbf is 0 May 19 00:06:11.268: INFO: Restart count of pod container-probe-3757/liveness-18f4000c-1bb9-4e93-97dc-2862210bdfbf is now 1 (22.265514746s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:06:11.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3757" for this suite. 
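The restart observed above is driven entirely by the kubelet: the container's /healthz endpoint starts failing, the HTTP liveness probe trips, and restartCount goes from 0 to 1 about 22 seconds in. A minimal sketch of a pod carrying such a probe, using the v1.18-era Go types matching this run; the image is illustrative (k8s.gcr.io/liveness serves /healthz and deliberately starts failing it):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // illustrative image serving /healthz
				Args:  []string{"/server"},
				LivenessProbe: &corev1.Probe{
					// Note: this embedded field is renamed ProbeHandler in k8s.io/api v0.23+.
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}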
• [SLOW TEST:28.518 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":59,"skipped":1157,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:06:11.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:06:12.585: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:06:14.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443572, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443572, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443572, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443572, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:06:17.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:06:17.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7844-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:06:18.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-2605" for this suite. STEP: Destroying namespace "webhook-2605-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.603 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":60,"skipped":1164,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:06:18.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 19 00:06:19.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2533' May 19 00:06:19.742: INFO: stderr: "" May 19 00:06:19.742: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 19 00:06:19.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2533' May 19 00:06:19.871: INFO: stderr: "" May 19 00:06:19.871: INFO: stdout: "update-demo-nautilus-798sj update-demo-nautilus-q9whk " May 19 00:06:19.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-798sj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2533' May 19 00:06:20.213: INFO: stderr: "" May 19 00:06:20.213: INFO: stdout: "" May 19 00:06:20.213: INFO: update-demo-nautilus-798sj is created but not running May 19 00:06:25.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2533' May 19 00:06:25.332: INFO: stderr: "" May 19 00:06:25.332: INFO: stdout: "update-demo-nautilus-798sj update-demo-nautilus-q9whk " May 19 00:06:25.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-798sj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2533' May 19 00:06:25.503: INFO: stderr: "" May 19 00:06:25.503: INFO: stdout: "true" May 19 00:06:25.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-798sj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2533' May 19 00:06:25.597: INFO: stderr: "" May 19 00:06:25.597: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 00:06:25.597: INFO: validating pod update-demo-nautilus-798sj May 19 00:06:25.601: INFO: got data: { "image": "nautilus.jpg" } May 19 00:06:25.601: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 00:06:25.601: INFO: update-demo-nautilus-798sj is verified up and running May 19 00:06:25.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q9whk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2533' May 19 00:06:25.704: INFO: stderr: "" May 19 00:06:25.704: INFO: stdout: "true" May 19 00:06:25.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q9whk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2533' May 19 00:06:25.797: INFO: stderr: "" May 19 00:06:25.797: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 19 00:06:25.797: INFO: validating pod update-demo-nautilus-q9whk May 19 00:06:25.801: INFO: got data: { "image": "nautilus.jpg" } May 19 00:06:25.801: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 19 00:06:25.801: INFO: update-demo-nautilus-q9whk is verified up and running STEP: using delete to clean up resources May 19 00:06:25.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2533' May 19 00:06:25.925: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 19 00:06:25.925: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 19 00:06:25.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2533' May 19 00:06:26.048: INFO: stderr: "No resources found in kubectl-2533 namespace.\n" May 19 00:06:26.048: INFO: stdout: "" May 19 00:06:26.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2533 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 00:06:26.300: INFO: stderr: "" May 19 00:06:26.300: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:06:26.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2533" for this suite. • [SLOW TEST:7.568 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":61,"skipped":1175,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:06:26.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:06:33.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1669" for this suite. • [SLOW TEST:7.340 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":62,"skipped":1182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:06:33.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:06:33.995: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 19 00:06:36.074: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:06:37.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-406" for this suite. 
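The "desired failure condition" checked above is the ReplicaFailure condition that the controller sets on the RC's status when pod creation is rejected, here by the two-pod quota; scaling down within quota clears it. A client-go sketch that reads the condition back, reusing the namespace and RC name from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	rc, err := cs.CoreV1().ReplicationControllers("replication-controller-406").
		Get(context.Background(), "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The controller surfaces quota rejections as a ReplicaFailure condition.
	for _, c := range rc.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Printf("ReplicaFailure=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}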
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":63,"skipped":1211,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:06:37.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 19 00:06:37.845: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 19 00:06:48.694: INFO: >>> kubeConfig: /root/.kube/config May 19 00:06:51.641: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:07:02.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-805" for this suite. 
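"Multiple CRDs of same group but different versions" reduces to CustomResourceDefinition objects whose spec lists several entries under versions, each served, with exactly one marked as the storage version; the apiserver then publishes every served version in its OpenAPI document, which is what the test inspects. A structural sketch with illustrative names (the test generates random e2e-test-... groups), printed as JSON:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			// Two served versions of one group; exactly one is the storage version.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}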
• [SLOW TEST:24.829 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":64,"skipped":1223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:07:02.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 19 00:07:02.594: INFO: Waiting up to 5m0s for pod "downward-api-62204105-688e-46d6-a319-54cb3e79b214" in namespace "downward-api-109" to be "Succeeded or Failed" May 19 00:07:02.602: INFO: Pod "downward-api-62204105-688e-46d6-a319-54cb3e79b214": Phase="Pending", Reason="", readiness=false. Elapsed: 7.861169ms May 19 00:07:04.606: INFO: Pod "downward-api-62204105-688e-46d6-a319-54cb3e79b214": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012321589s May 19 00:07:06.611: INFO: Pod "downward-api-62204105-688e-46d6-a319-54cb3e79b214": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016739508s STEP: Saw pod success May 19 00:07:06.611: INFO: Pod "downward-api-62204105-688e-46d6-a319-54cb3e79b214" satisfied condition "Succeeded or Failed" May 19 00:07:06.614: INFO: Trying to get logs from node latest-worker2 pod downward-api-62204105-688e-46d6-a319-54cb3e79b214 container dapi-container: STEP: delete the pod May 19 00:07:06.782: INFO: Waiting for pod downward-api-62204105-688e-46d6-a319-54cb3e79b214 to disappear May 19 00:07:06.794: INFO: Pod downward-api-62204105-688e-46d6-a319-54cb3e79b214 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:07:06.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-109" for this suite. 
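The env vars in this test come from the downward API's resourceFieldRef, which exposes a container's own requests and limits to its environment. A sketch of the container wiring with illustrative values, printed as JSON; note that when a container declares no limits, the downward API substitutes node-allocatable values, which is how the test still sees numbers for an unconstrained container:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mkEnv := func(name, res string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "dapi-container",
					Resource:      res,
					Divisor:       resource.MustParse("1"),
				},
			},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					mkEnv("CPU_LIMIT", "limits.cpu"),
					mkEnv("MEMORY_LIMIT", "limits.memory"),
					mkEnv("CPU_REQUEST", "requests.cpu"),
					mkEnv("MEMORY_REQUEST", "requests.memory"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}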
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1247,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:07:06.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 19 00:07:06.903: INFO: Waiting up to 5m0s for pod "pod-97862d67-f4aa-42b2-9025-b3e19c476c4d" in namespace "emptydir-750" to be "Succeeded or Failed" May 19 00:07:06.907: INFO: Pod "pod-97862d67-f4aa-42b2-9025-b3e19c476c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.595468ms May 19 00:07:08.911: INFO: Pod "pod-97862d67-f4aa-42b2-9025-b3e19c476c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007932917s May 19 00:07:10.916: INFO: Pod "pod-97862d67-f4aa-42b2-9025-b3e19c476c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01239051s STEP: Saw pod success May 19 00:07:10.916: INFO: Pod "pod-97862d67-f4aa-42b2-9025-b3e19c476c4d" satisfied condition "Succeeded or Failed" May 19 00:07:10.919: INFO: Trying to get logs from node latest-worker2 pod pod-97862d67-f4aa-42b2-9025-b3e19c476c4d container test-container: STEP: delete the pod May 19 00:07:10.939: INFO: Waiting for pod pod-97862d67-f4aa-42b2-9025-b3e19c476c4d to disappear May 19 00:07:10.961: INFO: Pod pod-97862d67-f4aa-42b2-9025-b3e19c476c4d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:07:10.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-750" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":66,"skipped":1248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:07:10.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:07:11.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 19 00:07:11.230: INFO: stderr: "" May 19 00:07:11.230: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:07:11.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5457" for this suite. 
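The assertion in this test is only that both the Client Version and Server Version halves appear in the output. For programmatic checks the same data is available as JSON; a sketch that shells out the way the test does, assuming kubectl is on PATH and picks up the kubeconfig from its default resolution:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "version", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var v struct {
		ClientVersion struct {
			GitVersion string `json:"gitVersion"`
		} `json:"clientVersion"`
		ServerVersion struct {
			GitVersion string `json:"gitVersion"`
		} `json:"serverVersion"`
	}
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Println("client:", v.ClientVersion.GitVersion, "server:", v.ServerVersion.GitVersion)
}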
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":67,"skipped":1280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:07:11.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:07:11.319: INFO: Creating ReplicaSet my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd May 19 00:07:11.338: INFO: Pod name my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd: Found 0 pods out of 1 May 19 00:07:16.351: INFO: Pod name my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd: Found 1 pods out of 1 May 19 00:07:16.351: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd" is running May 19 00:07:16.356: INFO: Pod "my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd-zclqh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 00:07:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 00:07:14 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 00:07:14 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 00:07:11 +0000 UTC Reason: Message:}]) May 19 00:07:16.357: INFO: Trying to dial the pod May 19 00:07:21.370: INFO: Controller my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd: Got expected result from replica 1 [my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd-zclqh]: "my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd-zclqh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:07:21.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8836" for this suite. 
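The ReplicaSet test creates one replica of a hostname-serving image and dials the pod until it answers with its own name. A structural sketch of such a ReplicaSet, printed as JSON; the image, args, and port are assumptions based on the agnhost serve-hostname helper in the e2e image family, and the selector must match the template labels or the API rejects the object:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			// Selector and template labels must agree.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.12", // assumed era-appropriate tag
						Args:  []string{"serve-hostname"},
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(out))
}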
• [SLOW TEST:10.140 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":68,"skipped":1333,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:07:21.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 19 00:07:21.472: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 19 00:07:21.491: INFO: Waiting for terminating namespaces to be deleted... May 19 00:07:21.494: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 19 00:07:21.500: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 19 00:07:21.500: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 19 00:07:21.500: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 19 00:07:21.500: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 19 00:07:21.500: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 19 00:07:21.500: INFO: Container kindnet-cni ready: true, restart count 0 May 19 00:07:21.500: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 19 00:07:21.500: INFO: Container kube-proxy ready: true, restart count 0 May 19 00:07:21.500: INFO: my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd-zclqh from replicaset-8836 started at 2020-05-19 00:07:11 +0000 UTC (1 container statuses recorded) May 19 00:07:21.500: INFO: Container my-hostname-basic-83b24691-07c2-4453-8489-e4ee6ade23dd ready: true, restart count 0 May 19 00:07:21.500: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 19 00:07:21.505: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 19 00:07:21.505: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 19 00:07:21.505: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 19 00:07:21.505: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 19 00:07:21.505: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 19 
00:07:21.505: INFO: Container kindnet-cni ready: true, restart count 0 May 19 00:07:21.505: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 19 00:07:21.505: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4c2756d9-7577-465d-bf54-db3ef8cb2b11 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-4c2756d9-7577-465d-bf54-db3ef8cb2b11 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-4c2756d9-7577-465d-bf54-db3ef8cb2b11 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:07:29.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-351" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.294 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":69,"skipped":1343,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:07:29.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4951 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4951;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4951 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4951;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4951.svc A)" 
&& test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4951.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4951.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4951.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4951.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4951.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4951.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4951.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4951.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4951.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4951.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.138.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.138.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.138.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.138.71_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4951 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4951;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4951 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4951;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4951.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4951.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4951.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4951.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4951.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4951.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4951.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4951.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4951.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4951.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4951.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4951.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.138.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.138.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.138.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.138.71_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 00:07:35.872: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.879: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.883: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.886: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.888: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.890: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.894: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.897: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.912: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.914: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.916: INFO: Unable to read jessie_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.918: INFO: Unable 
to read jessie_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.920: INFO: Unable to read jessie_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.922: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.924: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.927: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:35.944: INFO: Lookups using dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4951 wheezy_tcp@dns-test-service.dns-4951 wheezy_udp@dns-test-service.dns-4951.svc wheezy_tcp@dns-test-service.dns-4951.svc wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4951 jessie_tcp@dns-test-service.dns-4951 jessie_udp@dns-test-service.dns-4951.svc jessie_tcp@dns-test-service.dns-4951.svc jessie_udp@_http._tcp.dns-test-service.dns-4951.svc jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc] May 19 00:07:40.948: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.951: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.954: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.957: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.959: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.961: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.963: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.965: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.981: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.983: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.986: INFO: Unable to read jessie_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.989: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.992: INFO: Unable to read jessie_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.995: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:40.997: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:41.000: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:41.019: INFO: Lookups using dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4951 wheezy_tcp@dns-test-service.dns-4951 wheezy_udp@dns-test-service.dns-4951.svc wheezy_tcp@dns-test-service.dns-4951.svc wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4951 jessie_tcp@dns-test-service.dns-4951 jessie_udp@dns-test-service.dns-4951.svc jessie_tcp@dns-test-service.dns-4951.svc jessie_udp@_http._tcp.dns-test-service.dns-4951.svc jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc] May 19 00:07:45.948: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.952: INFO: Unable to read 
wheezy_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.956: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.959: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.962: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.964: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.967: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.969: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.988: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.990: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.993: INFO: Unable to read jessie_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.996: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:45.999: INFO: Unable to read jessie_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:46.002: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:46.004: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:46.007: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:46.026: INFO: Lookups using dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4951 wheezy_tcp@dns-test-service.dns-4951 wheezy_udp@dns-test-service.dns-4951.svc wheezy_tcp@dns-test-service.dns-4951.svc wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4951 jessie_tcp@dns-test-service.dns-4951 jessie_udp@dns-test-service.dns-4951.svc jessie_tcp@dns-test-service.dns-4951.svc jessie_udp@_http._tcp.dns-test-service.dns-4951.svc jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc] May 19 00:07:50.950: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.954: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.957: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.961: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.964: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.967: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.969: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.971: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.988: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.991: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.994: INFO: Unable to read jessie_udp@dns-test-service.dns-4951 from pod 
dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.996: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:50.999: INFO: Unable to read jessie_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:51.002: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:51.005: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:51.007: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:51.025: INFO: Lookups using dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4951 wheezy_tcp@dns-test-service.dns-4951 wheezy_udp@dns-test-service.dns-4951.svc wheezy_tcp@dns-test-service.dns-4951.svc wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4951 jessie_tcp@dns-test-service.dns-4951 jessie_udp@dns-test-service.dns-4951.svc jessie_tcp@dns-test-service.dns-4951.svc jessie_udp@_http._tcp.dns-test-service.dns-4951.svc jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc] May 19 00:07:55.948: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.951: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.953: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.956: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.958: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.960: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951.svc from pod 
dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.963: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.965: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.984: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.987: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.989: INFO: Unable to read jessie_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.995: INFO: Unable to read jessie_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:55.998: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:56.001: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:56.004: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:07:56.019: INFO: Lookups using dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4951 wheezy_tcp@dns-test-service.dns-4951 wheezy_udp@dns-test-service.dns-4951.svc wheezy_tcp@dns-test-service.dns-4951.svc wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4951 jessie_tcp@dns-test-service.dns-4951 jessie_udp@dns-test-service.dns-4951.svc jessie_tcp@dns-test-service.dns-4951.svc jessie_udp@_http._tcp.dns-test-service.dns-4951.svc jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc] May 19 00:08:00.962: INFO: Unable to read wheezy_udp@dns-test-service from pod 
dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:00.965: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:00.967: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:00.970: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:00.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:00.981: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:00.983: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:00.985: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:01.012: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:01.015: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:01.019: INFO: Unable to read jessie_udp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:01.022: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951 from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:01.024: INFO: Unable to read jessie_udp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:01.027: INFO: Unable to read jessie_tcp@dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:01.030: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4951.svc from pod 
dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:01.032: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc from pod dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c: the server could not find the requested resource (get pods dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c) May 19 00:08:01.046: INFO: Lookups using dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4951 wheezy_tcp@dns-test-service.dns-4951 wheezy_udp@dns-test-service.dns-4951.svc wheezy_tcp@dns-test-service.dns-4951.svc wheezy_udp@_http._tcp.dns-test-service.dns-4951.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4951.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4951 jessie_tcp@dns-test-service.dns-4951 jessie_udp@dns-test-service.dns-4951.svc jessie_tcp@dns-test-service.dns-4951.svc jessie_udp@_http._tcp.dns-test-service.dns-4951.svc jessie_tcp@_http._tcp.dns-test-service.dns-4951.svc] May 19 00:08:06.026: INFO: DNS probes using dns-4951/dns-test-d4142bd4-d65a-4950-bf7a-cad161bab33c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:08:06.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4951" for this suite. • [SLOW TEST:37.189 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":70,"skipped":1350,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:08:06.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:08:07.136: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"fa1b710f-9fab-4bca-90bb-cbc2f262eea6", Controller:(*bool)(0xc0032b6402), BlockOwnerDeletion:(*bool)(0xc0032b6403)}} May 19 00:08:07.168: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"de9164a7-0174-4805-9197-4961e6e67325", Controller:(*bool)(0xc00330eae2), BlockOwnerDeletion:(*bool)(0xc00330eae3)}} May 19 00:08:07.185: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5f31d4e2-b419-4f37-9d2a-af0ac155658a", Controller:(*bool)(0xc0032b66aa), BlockOwnerDeletion:(*bool)(0xc0032b66ab)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:08:12.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6859" for this suite. • [SLOW TEST:5.563 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":71,"skipped":1357,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:08:12.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 19 00:08:13.065: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 19 00:08:15.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443693, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443693, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443693, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443693, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:08:18.261: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:08:18.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:08:19.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1720" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.144 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":72,"skipped":1379,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:08:19.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:08:20.777: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:08:22.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443700, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443700, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443700, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725443700, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 
00:08:25.823: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:08:25.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5567-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:08:27.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-742" for this suite. STEP: Destroying namespace "webhook-742-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.604 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":73,"skipped":1379,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:08:27.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 19 00:08:27.296: INFO: Waiting up to 5m0s for pod "var-expansion-9df603df-3ed7-4d23-af80-ccfcdfb16e9d" in namespace "var-expansion-8451" to be "Succeeded or Failed" May 19 00:08:27.519: INFO: Pod "var-expansion-9df603df-3ed7-4d23-af80-ccfcdfb16e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 223.2184ms May 19 00:08:29.522: INFO: Pod "var-expansion-9df603df-3ed7-4d23-af80-ccfcdfb16e9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226371122s May 19 00:08:31.526: INFO: Pod "var-expansion-9df603df-3ed7-4d23-af80-ccfcdfb16e9d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.230396254s STEP: Saw pod success May 19 00:08:31.526: INFO: Pod "var-expansion-9df603df-3ed7-4d23-af80-ccfcdfb16e9d" satisfied condition "Succeeded or Failed" May 19 00:08:31.529: INFO: Trying to get logs from node latest-worker2 pod var-expansion-9df603df-3ed7-4d23-af80-ccfcdfb16e9d container dapi-container: STEP: delete the pod May 19 00:08:31.552: INFO: Waiting for pod var-expansion-9df603df-3ed7-4d23-af80-ccfcdfb16e9d to disappear May 19 00:08:31.603: INFO: Pod var-expansion-9df603df-3ed7-4d23-af80-ccfcdfb16e9d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:08:31.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8451" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":74,"skipped":1379,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:08:31.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-fnm6 STEP: Creating a pod to test atomic-volume-subpath May 19 00:08:31.735: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fnm6" in namespace "subpath-119" to be "Succeeded or Failed" May 19 00:08:31.749: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.680737ms May 19 00:08:33.753: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017727561s May 19 00:08:35.757: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. Elapsed: 4.022280726s May 19 00:08:37.762: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. Elapsed: 6.027324112s May 19 00:08:39.767: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. Elapsed: 8.031917955s May 19 00:08:41.772: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. Elapsed: 10.036661154s May 19 00:08:43.776: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. Elapsed: 12.041007066s May 19 00:08:45.780: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. Elapsed: 14.045198427s May 19 00:08:47.785: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. Elapsed: 16.050000398s May 19 00:08:49.789: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.054289152s May 19 00:08:51.794: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. Elapsed: 20.059047141s May 19 00:08:53.799: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Running", Reason="", readiness=true. Elapsed: 22.064022348s May 19 00:08:55.804: INFO: Pod "pod-subpath-test-downwardapi-fnm6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.068985318s STEP: Saw pod success May 19 00:08:55.804: INFO: Pod "pod-subpath-test-downwardapi-fnm6" satisfied condition "Succeeded or Failed" May 19 00:08:55.808: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-fnm6 container test-container-subpath-downwardapi-fnm6: STEP: delete the pod May 19 00:08:56.001: INFO: Waiting for pod pod-subpath-test-downwardapi-fnm6 to disappear May 19 00:08:56.025: INFO: Pod pod-subpath-test-downwardapi-fnm6 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-fnm6 May 19 00:08:56.025: INFO: Deleting pod "pod-subpath-test-downwardapi-fnm6" in namespace "subpath-119" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:08:56.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-119" for this suite. • [SLOW TEST:24.424 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":75,"skipped":1391,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:08:56.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 19 00:08:56.089: INFO: >>> kubeConfig: /root/.kube/config May 19 00:08:59.039: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:09:09.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3291" for this suite. 
• [SLOW TEST:13.829 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":76,"skipped":1407,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:09:09.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:09:09.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7290" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":77,"skipped":1417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:09:09.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:09:10.052: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 19 00:09:12.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9286 create -f -' May 19 00:09:15.475: INFO: stderr: "" May 19 00:09:15.475: INFO: stdout: "e2e-test-crd-publish-openapi-8318-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 19 00:09:15.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9286 delete e2e-test-crd-publish-openapi-8318-crds test-cr' May 19 00:09:15.607: INFO: stderr: "" May 19 00:09:15.607: INFO: stdout: "e2e-test-crd-publish-openapi-8318-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 19 00:09:15.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9286 apply -f -' May 19 00:09:15.866: INFO: stderr: "" May 19 00:09:15.866: INFO: stdout: "e2e-test-crd-publish-openapi-8318-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 19 00:09:15.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9286 delete e2e-test-crd-publish-openapi-8318-crds test-cr' May 19 00:09:15.975: INFO: stderr: "" May 19 00:09:15.975: INFO: stdout: "e2e-test-crd-publish-openapi-8318-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 19 00:09:15.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8318-crds' May 19 00:09:16.215: INFO: stderr: "" May 19 00:09:16.215: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8318-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:09:19.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9286" for this suite. 
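The create/apply/delete round-trips above work only because the CRD under test sets x-kubernetes-preserve-unknown-fields: true at the root of its schema, which disables pruning so arbitrary properties survive client-side validation and storage; it is also why kubectl explain returns an empty DESCRIPTION. A minimal sketch of such a CRD and CR, with hypothetical names (the test generates random e2e-test-crd-publish-openapi-* names):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com          # hypothetical; the test uses a generated name
  spec:
    group: example.com
    scope: Namespaced
    names: {plural: widgets, singular: widget, kind: Widget}
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true   # keep unknown fields at the schema root
  EOF
  # any unknown property is accepted, as the log's create/apply of test-cr shows:
  kubectl create -f - <<'EOF'
  apiVersion: example.com/v1
  kind: Widget
  metadata: {name: test-cr}
  spec: {anything: at-all}
  EOF
  kubectl explain widgets    # KIND/VERSION come back with an empty DESCRIPTION, as logged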
• [SLOW TEST:9.214 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":78,"skipped":1464,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:09:19.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 19 00:09:19.269: INFO: Waiting up to 5m0s for pod "pod-99ebeee5-252b-442f-be55-bd6945814956" in namespace "emptydir-7037" to be "Succeeded or Failed" May 19 00:09:19.328: INFO: Pod "pod-99ebeee5-252b-442f-be55-bd6945814956": Phase="Pending", Reason="", readiness=false. Elapsed: 59.701382ms May 19 00:09:21.343: INFO: Pod "pod-99ebeee5-252b-442f-be55-bd6945814956": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074821813s May 19 00:09:23.347: INFO: Pod "pod-99ebeee5-252b-442f-be55-bd6945814956": Phase="Running", Reason="", readiness=true. Elapsed: 4.078759778s May 19 00:09:25.358: INFO: Pod "pod-99ebeee5-252b-442f-be55-bd6945814956": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.089405146s STEP: Saw pod success May 19 00:09:25.358: INFO: Pod "pod-99ebeee5-252b-442f-be55-bd6945814956" satisfied condition "Succeeded or Failed" May 19 00:09:25.360: INFO: Trying to get logs from node latest-worker2 pod pod-99ebeee5-252b-442f-be55-bd6945814956 container test-container: STEP: delete the pod May 19 00:09:25.392: INFO: Waiting for pod pod-99ebeee5-252b-442f-be55-bd6945814956 to disappear May 19 00:09:25.415: INFO: Pod pod-99ebeee5-252b-442f-be55-bd6945814956 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:09:25.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7037" for this suite. 
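The (non-root,0644,tmpfs) case above reduces to three knobs: a memory-backed emptyDir, a non-root securityContext, and an assertion on the created file's mode. A rough stand-alone equivalent of what the test's mounttest container checks, sketched with busybox and hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-nonroot-0644        # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                  # the "non-root" half of the matrix
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f && grep test-volume /proc/mounts"]
      volumeMounts:
      - name: scratch
        mountPath: /test-volume
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory                 # the "tmpfs" half: RAM-backed volume
  EOF
  kubectl logs emptydir-nonroot-0644   # expect "644" and a tmpfs entry in /proc/mounts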
• [SLOW TEST:6.244 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1466,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:09:25.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2731 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2731 I0519 00:09:25.651932 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2731, replica count: 2 I0519 00:09:28.702323 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:09:31.702597 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 00:09:31.702: INFO: Creating new exec pod May 19 00:09:36.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2731 execpod9zcdv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 19 00:09:37.019: INFO: stderr: "I0519 00:09:36.897512 1750 log.go:172] (0xc000af5550) (0xc000a886e0) Create stream\nI0519 00:09:36.897564 1750 log.go:172] (0xc000af5550) (0xc000a886e0) Stream added, broadcasting: 1\nI0519 00:09:36.901846 1750 log.go:172] (0xc000af5550) Reply frame received for 1\nI0519 00:09:36.901892 1750 log.go:172] (0xc000af5550) (0xc000456280) Create stream\nI0519 00:09:36.901903 1750 log.go:172] (0xc000af5550) (0xc000456280) Stream added, broadcasting: 3\nI0519 00:09:36.902872 1750 log.go:172] (0xc000af5550) Reply frame received for 3\nI0519 00:09:36.902896 1750 log.go:172] (0xc000af5550) (0xc000436dc0) Create stream\nI0519 00:09:36.902902 1750 log.go:172] (0xc000af5550) (0xc000436dc0) Stream added, broadcasting: 5\nI0519 00:09:36.903807 1750 log.go:172] (0xc000af5550) Reply frame received for 5\nI0519 00:09:36.988665 1750 log.go:172] (0xc000af5550) Data frame received for 5\nI0519 00:09:36.988685 1750 log.go:172] (0xc000436dc0) (5) Data frame handling\nI0519 00:09:36.988698 1750 log.go:172] (0xc000436dc0) (5) Data frame sent\n+ nc 
-zv -t -w 2 externalname-service 80\nI0519 00:09:37.011400 1750 log.go:172] (0xc000af5550) Data frame received for 3\nI0519 00:09:37.011433 1750 log.go:172] (0xc000456280) (3) Data frame handling\nI0519 00:09:37.011469 1750 log.go:172] (0xc000af5550) Data frame received for 5\nI0519 00:09:37.011493 1750 log.go:172] (0xc000436dc0) (5) Data frame handling\nI0519 00:09:37.011506 1750 log.go:172] (0xc000436dc0) (5) Data frame sent\nI0519 00:09:37.011514 1750 log.go:172] (0xc000af5550) Data frame received for 5\nI0519 00:09:37.011524 1750 log.go:172] (0xc000436dc0) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0519 00:09:37.013292 1750 log.go:172] (0xc000af5550) Data frame received for 1\nI0519 00:09:37.013421 1750 log.go:172] (0xc000a886e0) (1) Data frame handling\nI0519 00:09:37.013452 1750 log.go:172] (0xc000a886e0) (1) Data frame sent\nI0519 00:09:37.013471 1750 log.go:172] (0xc000af5550) (0xc000a886e0) Stream removed, broadcasting: 1\nI0519 00:09:37.013493 1750 log.go:172] (0xc000af5550) Go away received\nI0519 00:09:37.013893 1750 log.go:172] (0xc000af5550) (0xc000a886e0) Stream removed, broadcasting: 1\nI0519 00:09:37.013914 1750 log.go:172] (0xc000af5550) (0xc000456280) Stream removed, broadcasting: 3\nI0519 00:09:37.013924 1750 log.go:172] (0xc000af5550) (0xc000436dc0) Stream removed, broadcasting: 5\n" May 19 00:09:37.019: INFO: stdout: "" May 19 00:09:37.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2731 execpod9zcdv -- /bin/sh -x -c nc -zv -t -w 2 10.111.195.206 80' May 19 00:09:37.228: INFO: stderr: "I0519 00:09:37.148185 1772 log.go:172] (0xc000a340b0) (0xc000139cc0) Create stream\nI0519 00:09:37.148249 1772 log.go:172] (0xc000a340b0) (0xc000139cc0) Stream added, broadcasting: 1\nI0519 00:09:37.151379 1772 log.go:172] (0xc000a340b0) Reply frame received for 1\nI0519 00:09:37.151423 1772 log.go:172] (0xc000a340b0) (0xc0003fc000) Create stream\nI0519 00:09:37.151437 1772 log.go:172] (0xc000a340b0) (0xc0003fc000) Stream added, broadcasting: 3\nI0519 00:09:37.152578 1772 log.go:172] (0xc000a340b0) Reply frame received for 3\nI0519 00:09:37.152614 1772 log.go:172] (0xc000a340b0) (0xc000666140) Create stream\nI0519 00:09:37.152627 1772 log.go:172] (0xc000a340b0) (0xc000666140) Stream added, broadcasting: 5\nI0519 00:09:37.153928 1772 log.go:172] (0xc000a340b0) Reply frame received for 5\nI0519 00:09:37.218556 1772 log.go:172] (0xc000a340b0) Data frame received for 5\nI0519 00:09:37.218586 1772 log.go:172] (0xc000666140) (5) Data frame handling\nI0519 00:09:37.218626 1772 log.go:172] (0xc000666140) (5) Data frame sent\n+ nc -zv -t -w 2 10.111.195.206 80\nI0519 00:09:37.220628 1772 log.go:172] (0xc000a340b0) Data frame received for 5\nI0519 00:09:37.220697 1772 log.go:172] (0xc000666140) (5) Data frame handling\nI0519 00:09:37.220722 1772 log.go:172] (0xc000666140) (5) Data frame sent\nConnection to 10.111.195.206 80 port [tcp/http] succeeded!\nI0519 00:09:37.220989 1772 log.go:172] (0xc000a340b0) Data frame received for 5\nI0519 00:09:37.221006 1772 log.go:172] (0xc000666140) (5) Data frame handling\nI0519 00:09:37.221366 1772 log.go:172] (0xc000a340b0) Data frame received for 3\nI0519 00:09:37.221431 1772 log.go:172] (0xc0003fc000) (3) Data frame handling\nI0519 00:09:37.223331 1772 log.go:172] (0xc000a340b0) Data frame received for 1\nI0519 00:09:37.223348 1772 log.go:172] (0xc000139cc0) (1) Data frame handling\nI0519 00:09:37.223359 1772 log.go:172] 
(0xc000139cc0) (1) Data frame sent\nI0519 00:09:37.223369 1772 log.go:172] (0xc000a340b0) (0xc000139cc0) Stream removed, broadcasting: 1\nI0519 00:09:37.223453 1772 log.go:172] (0xc000a340b0) Go away received\nI0519 00:09:37.223681 1772 log.go:172] (0xc000a340b0) (0xc000139cc0) Stream removed, broadcasting: 1\nI0519 00:09:37.223697 1772 log.go:172] (0xc000a340b0) (0xc0003fc000) Stream removed, broadcasting: 3\nI0519 00:09:37.223703 1772 log.go:172] (0xc000a340b0) (0xc000666140) Stream removed, broadcasting: 5\n" May 19 00:09:37.228: INFO: stdout: "" May 19 00:09:37.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2731 execpod9zcdv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30932' May 19 00:09:37.508: INFO: stderr: "I0519 00:09:37.364910 1795 log.go:172] (0xc00003a0b0) (0xc00015f180) Create stream\nI0519 00:09:37.365046 1795 log.go:172] (0xc00003a0b0) (0xc00015f180) Stream added, broadcasting: 1\nI0519 00:09:37.366899 1795 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0519 00:09:37.366957 1795 log.go:172] (0xc00003a0b0) (0xc000374140) Create stream\nI0519 00:09:37.366971 1795 log.go:172] (0xc00003a0b0) (0xc000374140) Stream added, broadcasting: 3\nI0519 00:09:37.367874 1795 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0519 00:09:37.367908 1795 log.go:172] (0xc00003a0b0) (0xc000730c80) Create stream\nI0519 00:09:37.367920 1795 log.go:172] (0xc00003a0b0) (0xc000730c80) Stream added, broadcasting: 5\nI0519 00:09:37.368671 1795 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0519 00:09:37.502627 1795 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0519 00:09:37.502654 1795 log.go:172] (0xc000730c80) (5) Data frame handling\nI0519 00:09:37.502672 1795 log.go:172] (0xc000730c80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30932\nI0519 00:09:37.502899 1795 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0519 00:09:37.502937 1795 log.go:172] (0xc000730c80) (5) Data frame handling\nI0519 00:09:37.502995 1795 log.go:172] (0xc000730c80) (5) Data frame sent\nConnection to 172.17.0.13 30932 port [tcp/30932] succeeded!\nI0519 00:09:37.503138 1795 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0519 00:09:37.503182 1795 log.go:172] (0xc000730c80) (5) Data frame handling\nI0519 00:09:37.503351 1795 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0519 00:09:37.503387 1795 log.go:172] (0xc000374140) (3) Data frame handling\nI0519 00:09:37.504341 1795 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0519 00:09:37.504354 1795 log.go:172] (0xc00015f180) (1) Data frame handling\nI0519 00:09:37.504368 1795 log.go:172] (0xc00015f180) (1) Data frame sent\nI0519 00:09:37.504534 1795 log.go:172] (0xc00003a0b0) (0xc00015f180) Stream removed, broadcasting: 1\nI0519 00:09:37.504611 1795 log.go:172] (0xc00003a0b0) Go away received\nI0519 00:09:37.505078 1795 log.go:172] (0xc00003a0b0) (0xc00015f180) Stream removed, broadcasting: 1\nI0519 00:09:37.505097 1795 log.go:172] (0xc00003a0b0) (0xc000374140) Stream removed, broadcasting: 3\nI0519 00:09:37.505107 1795 log.go:172] (0xc00003a0b0) (0xc000730c80) Stream removed, broadcasting: 5\n" May 19 00:09:37.509: INFO: stdout: "" May 19 00:09:37.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2731 execpod9zcdv -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30932' May 19 00:09:37.729: INFO: stderr: "I0519 00:09:37.632558 1817 log.go:172] 
(0xc000aa1080) (0xc000345f40) Create stream\nI0519 00:09:37.632622 1817 log.go:172] (0xc000aa1080) (0xc000345f40) Stream added, broadcasting: 1\nI0519 00:09:37.635930 1817 log.go:172] (0xc000aa1080) Reply frame received for 1\nI0519 00:09:37.635994 1817 log.go:172] (0xc000aa1080) (0xc000136960) Create stream\nI0519 00:09:37.636017 1817 log.go:172] (0xc000aa1080) (0xc000136960) Stream added, broadcasting: 3\nI0519 00:09:37.637314 1817 log.go:172] (0xc000aa1080) Reply frame received for 3\nI0519 00:09:37.637350 1817 log.go:172] (0xc000aa1080) (0xc00024c460) Create stream\nI0519 00:09:37.637366 1817 log.go:172] (0xc000aa1080) (0xc00024c460) Stream added, broadcasting: 5\nI0519 00:09:37.638464 1817 log.go:172] (0xc000aa1080) Reply frame received for 5\nI0519 00:09:37.721413 1817 log.go:172] (0xc000aa1080) Data frame received for 3\nI0519 00:09:37.721464 1817 log.go:172] (0xc000136960) (3) Data frame handling\nI0519 00:09:37.721501 1817 log.go:172] (0xc000aa1080) Data frame received for 5\nI0519 00:09:37.721515 1817 log.go:172] (0xc00024c460) (5) Data frame handling\nI0519 00:09:37.721540 1817 log.go:172] (0xc00024c460) (5) Data frame sent\nI0519 00:09:37.721599 1817 log.go:172] (0xc000aa1080) Data frame received for 5\nI0519 00:09:37.721627 1817 log.go:172] (0xc00024c460) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30932\nConnection to 172.17.0.12 30932 port [tcp/30932] succeeded!\nI0519 00:09:37.723303 1817 log.go:172] (0xc000aa1080) Data frame received for 1\nI0519 00:09:37.723339 1817 log.go:172] (0xc000345f40) (1) Data frame handling\nI0519 00:09:37.723367 1817 log.go:172] (0xc000345f40) (1) Data frame sent\nI0519 00:09:37.723388 1817 log.go:172] (0xc000aa1080) (0xc000345f40) Stream removed, broadcasting: 1\nI0519 00:09:37.723446 1817 log.go:172] (0xc000aa1080) Go away received\nI0519 00:09:37.723870 1817 log.go:172] (0xc000aa1080) (0xc000345f40) Stream removed, broadcasting: 1\nI0519 00:09:37.723901 1817 log.go:172] (0xc000aa1080) (0xc000136960) Stream removed, broadcasting: 3\nI0519 00:09:37.723916 1817 log.go:172] (0xc000aa1080) (0xc00024c460) Stream removed, broadcasting: 5\n" May 19 00:09:37.730: INFO: stdout: "" May 19 00:09:37.730: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:09:37.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2731" for this suite. 
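What the log above compresses into a replication controller plus four nc probes is, at its core, a type flip on one Service object. A sketch of that flip in kubectl terms, assuming a hypothetical exec pod named execpod and backing pods labelled app=externalname-service; a JSON merge patch with "externalName": null drops the old field while the new type is set:

  kubectl create service externalname externalname-service --external-name=remote.example.com
  kubectl patch service externalname-service --type=merge -p \
    '{"spec":{"type":"NodePort","externalName":null,"selector":{"app":"externalname-service"},"ports":[{"port":80,"targetPort":80}]}}'
  NODEPORT=$(kubectl get service externalname-service -o jsonpath='{.spec.ports[0].nodePort}')
  # the same reachability probes the test runs from its exec pod:
  kubectl exec execpod -- nc -zv -t -w 2 externalname-service 80       # service DNS name
  kubectl exec execpod -- nc -zv -t -w 2 172.17.0.13 "$NODEPORT"       # node address from the log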
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.359 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":80,"skipped":1472,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:09:37.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 19 00:09:41.901: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-4675 PodName:var-expansion-877f1729-7bbe-4792-9c77-fdf7012154a5 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:09:41.901: INFO: >>> kubeConfig: /root/.kube/config I0519 00:09:41.925638 7 log.go:172] (0xc002900000) (0xc0024905a0) Create stream I0519 00:09:41.925670 7 log.go:172] (0xc002900000) (0xc0024905a0) Stream added, broadcasting: 1 I0519 00:09:41.927842 7 log.go:172] (0xc002900000) Reply frame received for 1 I0519 00:09:41.927898 7 log.go:172] (0xc002900000) (0xc001f36a00) Create stream I0519 00:09:41.927925 7 log.go:172] (0xc002900000) (0xc001f36a00) Stream added, broadcasting: 3 I0519 00:09:41.928838 7 log.go:172] (0xc002900000) Reply frame received for 3 I0519 00:09:41.928865 7 log.go:172] (0xc002900000) (0xc001f36aa0) Create stream I0519 00:09:41.928876 7 log.go:172] (0xc002900000) (0xc001f36aa0) Stream added, broadcasting: 5 I0519 00:09:41.929864 7 log.go:172] (0xc002900000) Reply frame received for 5 I0519 00:09:41.984660 7 log.go:172] (0xc002900000) Data frame received for 5 I0519 00:09:41.984694 7 log.go:172] (0xc001f36aa0) (5) Data frame handling I0519 00:09:41.984715 7 log.go:172] (0xc002900000) Data frame received for 3 I0519 00:09:41.984727 7 log.go:172] (0xc001f36a00) (3) Data frame handling I0519 00:09:41.986652 7 log.go:172] (0xc002900000) Data frame received for 1 I0519 00:09:41.986710 7 log.go:172] (0xc0024905a0) (1) Data frame handling I0519 00:09:41.986724 7 log.go:172] (0xc0024905a0) (1) Data frame sent I0519 00:09:41.986737 7 log.go:172] (0xc002900000) (0xc0024905a0) Stream removed, broadcasting: 1 I0519 00:09:41.986751 7 log.go:172] (0xc002900000) Go away received I0519 00:09:41.987037 7 log.go:172] (0xc002900000) (0xc0024905a0) Stream removed, broadcasting: 1 I0519 00:09:41.987059 7 log.go:172] (0xc002900000) (0xc001f36a00) Stream removed, 
broadcasting: 3 I0519 00:09:41.987075 7 log.go:172] (0xc002900000) (0xc001f36aa0) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 19 00:09:41.991: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-4675 PodName:var-expansion-877f1729-7bbe-4792-9c77-fdf7012154a5 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:09:41.991: INFO: >>> kubeConfig: /root/.kube/config I0519 00:09:42.018417 7 log.go:172] (0xc002bee420) (0xc001f36c80) Create stream I0519 00:09:42.018452 7 log.go:172] (0xc002bee420) (0xc001f36c80) Stream added, broadcasting: 1 I0519 00:09:42.020622 7 log.go:172] (0xc002bee420) Reply frame received for 1 I0519 00:09:42.020657 7 log.go:172] (0xc002bee420) (0xc002490780) Create stream I0519 00:09:42.020671 7 log.go:172] (0xc002bee420) (0xc002490780) Stream added, broadcasting: 3 I0519 00:09:42.021859 7 log.go:172] (0xc002bee420) Reply frame received for 3 I0519 00:09:42.021890 7 log.go:172] (0xc002bee420) (0xc001f36dc0) Create stream I0519 00:09:42.021903 7 log.go:172] (0xc002bee420) (0xc001f36dc0) Stream added, broadcasting: 5 I0519 00:09:42.022736 7 log.go:172] (0xc002bee420) Reply frame received for 5 I0519 00:09:42.084184 7 log.go:172] (0xc002bee420) Data frame received for 3 I0519 00:09:42.084236 7 log.go:172] (0xc002490780) (3) Data frame handling I0519 00:09:42.084283 7 log.go:172] (0xc002bee420) Data frame received for 5 I0519 00:09:42.084303 7 log.go:172] (0xc001f36dc0) (5) Data frame handling I0519 00:09:42.085636 7 log.go:172] (0xc002bee420) Data frame received for 1 I0519 00:09:42.085653 7 log.go:172] (0xc001f36c80) (1) Data frame handling I0519 00:09:42.085670 7 log.go:172] (0xc001f36c80) (1) Data frame sent I0519 00:09:42.085808 7 log.go:172] (0xc002bee420) (0xc001f36c80) Stream removed, broadcasting: 1 I0519 00:09:42.085835 7 log.go:172] (0xc002bee420) Go away received I0519 00:09:42.085931 7 log.go:172] (0xc002bee420) (0xc001f36c80) Stream removed, broadcasting: 1 I0519 00:09:42.085947 7 log.go:172] (0xc002bee420) (0xc002490780) Stream removed, broadcasting: 3 I0519 00:09:42.085957 7 log.go:172] (0xc002bee420) (0xc001f36dc0) Stream removed, broadcasting: 5 STEP: updating the annotation value May 19 00:09:42.596: INFO: Successfully updated pod "var-expansion-877f1729-7bbe-4792-9c77-fdf7012154a5" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 19 00:09:42.607: INFO: Deleting pod "var-expansion-877f1729-7bbe-4792-9c77-fdf7012154a5" in namespace "var-expansion-4675" May 19 00:09:42.611: INFO: Wait up to 5m0s for pod "var-expansion-877f1729-7bbe-4792-9c77-fdf7012154a5" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:10:18.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4675" for this suite. 
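The subpath-writing test above hinges on subPathExpr, which expands a container environment variable into a volumeMount's subpath: the exec'd touch through /volume_mount and the test -f through /subpath_mount land in the same expanded directory. A minimal sketch with hypothetical names (the e2e test derives the expanded value from a pod annotation rather than a literal env value):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-subpath          # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "touch /volume_mount/mypath/foo/test.log && test -f /subpath_mount/test.log && echo ok"]
      env:
      - name: SUBPATH
        value: mypath/foo                # expanded in subPathExpr below
      volumeMounts:
      - name: workdir
        mountPath: /volume_mount         # the whole volume
      - name: workdir
        mountPath: /subpath_mount        # only $(SUBPATH) inside the same volume
        subPathExpr: $(SUBPATH)
    volumes:
    - name: workdir
      emptyDir: {}
  EOF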
• [SLOW TEST:40.859 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":81,"skipped":1487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:10:18.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 19 00:10:18.836: INFO: Waiting up to 5m0s for pod "pod-5dd6212f-ade2-44f1-8dce-0a60d2308789" in namespace "emptydir-4318" to be "Succeeded or Failed" May 19 00:10:18.847: INFO: Pod "pod-5dd6212f-ade2-44f1-8dce-0a60d2308789": Phase="Pending", Reason="", readiness=false. Elapsed: 11.108381ms May 19 00:10:20.852: INFO: Pod "pod-5dd6212f-ade2-44f1-8dce-0a60d2308789": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015502484s May 19 00:10:22.886: INFO: Pod "pod-5dd6212f-ade2-44f1-8dce-0a60d2308789": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049494351s STEP: Saw pod success May 19 00:10:22.886: INFO: Pod "pod-5dd6212f-ade2-44f1-8dce-0a60d2308789" satisfied condition "Succeeded or Failed" May 19 00:10:22.888: INFO: Trying to get logs from node latest-worker2 pod pod-5dd6212f-ade2-44f1-8dce-0a60d2308789 container test-container: STEP: delete the pod May 19 00:10:22.944: INFO: Waiting for pod pod-5dd6212f-ade2-44f1-8dce-0a60d2308789 to disappear May 19 00:10:23.203: INFO: Pod pod-5dd6212f-ade2-44f1-8dce-0a60d2308789 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:10:23.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4318" for this suite. 
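The (root,0666,default) variant above differs from the earlier tmpfs case only in its knobs: no runAsUser override (root), an explicit 0666 mode, and an emptyDir with no medium, so it is backed by node storage rather than RAM. Sketch, again with a hypothetical name:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-root-0666             # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox                     # runs as root by default
      command: ["sh", "-c", "echo data > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
      volumeMounts:
      - name: scratch
        mountPath: /test-volume
    volumes:
    - name: scratch
      emptyDir: {}                       # default medium: node disk, not tmpfs
  EOF
  kubectl logs emptydir-root-0666        # expect "666"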
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":82,"skipped":1536,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:10:23.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:10:23.303: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:10:27.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3170" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":83,"skipped":1548,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:10:27.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 19 00:10:35.725: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 00:10:35.751: INFO: Pod pod-with-prestop-http-hook still exists May 19 00:10:37.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 00:10:37.760: INFO: Pod pod-with-prestop-http-hook still exists May 19 00:10:39.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 00:10:39.755: INFO: Pod pod-with-prestop-http-hook still exists May 19 00:10:41.751: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 19 00:10:41.755: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:10:41.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2379" for this suite. • [SLOW TEST:14.221 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":84,"skipped":1564,"failed":0} [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:10:41.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 19 00:10:41.839: INFO: Waiting up to 5m0s for pod "client-containers-936208db-9066-4ff5-8471-7ef68a523d94" in namespace "containers-5692" to be "Succeeded or Failed" May 19 00:10:41.864: INFO: Pod "client-containers-936208db-9066-4ff5-8471-7ef68a523d94": Phase="Pending", Reason="", readiness=false. Elapsed: 25.537284ms May 19 00:10:43.916: INFO: Pod "client-containers-936208db-9066-4ff5-8471-7ef68a523d94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077101488s May 19 00:10:45.920: INFO: Pod "client-containers-936208db-9066-4ff5-8471-7ef68a523d94": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.081384657s STEP: Saw pod success May 19 00:10:45.920: INFO: Pod "client-containers-936208db-9066-4ff5-8471-7ef68a523d94" satisfied condition "Succeeded or Failed" May 19 00:10:45.922: INFO: Trying to get logs from node latest-worker2 pod client-containers-936208db-9066-4ff5-8471-7ef68a523d94 container test-container: STEP: delete the pod May 19 00:10:45.994: INFO: Waiting for pod client-containers-936208db-9066-4ff5-8471-7ef68a523d94 to disappear May 19 00:10:45.996: INFO: Pod client-containers-936208db-9066-4ff5-8471-7ef68a523d94 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:10:45.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5692" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1564,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:10:46.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:10:46.300: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a50ce134-02d0-413e-8ce3-ca73dd1aabca" in namespace "projected-5168" to be "Succeeded or Failed" May 19 00:10:46.339: INFO: Pod "downwardapi-volume-a50ce134-02d0-413e-8ce3-ca73dd1aabca": Phase="Pending", Reason="", readiness=false. Elapsed: 38.616612ms May 19 00:10:48.343: INFO: Pod "downwardapi-volume-a50ce134-02d0-413e-8ce3-ca73dd1aabca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042661931s May 19 00:10:50.346: INFO: Pod "downwardapi-volume-a50ce134-02d0-413e-8ce3-ca73dd1aabca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04623844s STEP: Saw pod success May 19 00:10:50.346: INFO: Pod "downwardapi-volume-a50ce134-02d0-413e-8ce3-ca73dd1aabca" satisfied condition "Succeeded or Failed" May 19 00:10:50.348: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a50ce134-02d0-413e-8ce3-ca73dd1aabca container client-container: STEP: delete the pod May 19 00:10:50.378: INFO: Waiting for pod downwardapi-volume-a50ce134-02d0-413e-8ce3-ca73dd1aabca to disappear May 19 00:10:50.393: INFO: Pod downwardapi-volume-a50ce134-02d0-413e-8ce3-ca73dd1aabca no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:10:50.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5168" for this suite. 
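The projected downwardAPI test above surfaces the container's own CPU limit as a file through a resourceFieldRef. The divisor matters: with the default divisor of "1", a 500m limit is rounded up and the file reads "1"; with a divisor of 1m it reads back "500". A sketch with hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cpu-limit            # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m              # report in millicores -> "500"
  EOF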
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1585,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:10:50.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:10:50.492: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:10:51.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7293" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":87,"skipped":1596,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:10:51.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5106 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 19 00:10:51.818: INFO: Found 0 stateful pods, waiting for 3 May 19 00:11:01.823: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 00:11:01.823: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 00:11:01.823: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 19 00:11:11.824: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 00:11:11.824: INFO: Waiting for pod 
ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 00:11:11.824: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 19 00:11:11.854: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 19 00:11:21.909: INFO: Updating stateful set ss2 May 19 00:11:21.936: INFO: Waiting for Pod statefulset-5106/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 19 00:11:31.944: INFO: Waiting for Pod statefulset-5106/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 19 00:11:42.480: INFO: Found 2 stateful pods, waiting for 3 May 19 00:11:52.486: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 00:11:52.486: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 00:11:52.486: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 19 00:11:52.511: INFO: Updating stateful set ss2 May 19 00:11:52.519: INFO: Waiting for Pod statefulset-5106/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 19 00:12:02.545: INFO: Updating stateful set ss2 May 19 00:12:02.610: INFO: Waiting for StatefulSet statefulset-5106/ss2 to complete update May 19 00:12:02.610: INFO: Waiting for Pod statefulset-5106/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 19 00:12:12.618: INFO: Waiting for StatefulSet statefulset-5106/ss2 to complete update May 19 00:12:12.618: INFO: Waiting for Pod statefulset-5106/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 19 00:12:22.617: INFO: Deleting all statefulset in ns statefulset-5106 May 19 00:12:22.619: INFO: Scaling statefulset ss2 to 0 May 19 00:12:52.651: INFO: Waiting for statefulset status.replicas updated to 0 May 19 00:12:52.656: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:12:52.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5106" for this suite. 
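Both the canary and the phased rollout in the StatefulSet test above are driven by a single field, spec.updateStrategy.rollingUpdate.partition: pods with ordinal >= partition move to the new revision, pods below it stay on the old one, which is also why a deleted pod is restored at its old revision. A sketch of the same choreography, assuming a hypothetical 3-replica StatefulSet named web with a container named nginx:

  # park the partition at the replica count so a template change creates a new
  # revision without touching any pod (the "not applying an update" step)
  kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
  kubectl set image statefulset/web nginx=httpd:2.4.39-alpine
  # canary: only the highest ordinal (web-2) crosses to the new revision
  kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
  # phased: walk the partition down to 0 to complete the rollout
  kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
  kubectl rollout status statefulset/web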
• [SLOW TEST:120.998 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":88,"skipped":1597,"failed":0} SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:12:52.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 19 00:12:52.766: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1644" to be "Succeeded or Failed" May 19 00:12:52.819: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 53.778278ms May 19 00:12:54.823: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057400116s May 19 00:12:56.827: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061340306s May 19 00:12:58.832: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065989011s STEP: Saw pod success May 19 00:12:58.832: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 19 00:12:58.835: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 19 00:12:58.984: INFO: Waiting for pod pod-host-path-test to disappear May 19 00:12:59.001: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:12:59.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1644" for this suite. 
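The HostPath test above mounts a directory from the node into the pod and then stats the mount point to assert its mode bits. A rough equivalent with hypothetical names (the conformance test uses its own mounttest image and two containers rather than busybox):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostpath-mode-check            # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-1
      image: busybox
      command: ["sh", "-c", "stat -c 'mode=%a type=%F' /test-volume"]
      volumeMounts:
      - name: host
        mountPath: /test-volume
    volumes:
    - name: host
      hostPath:
        path: /tmp/hostpath-mode-check   # node-local directory
        type: DirectoryOrCreate          # kubelet creates it if missing
  EOF
  kubectl logs hostpath-mode-check       # prints the mode the kubelet gave the directory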
• [SLOW TEST:6.328 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1600,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:12:59.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1802 May 19 00:13:03.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 19 00:13:03.489: INFO: stderr: "I0519 00:13:03.350119 1837 log.go:172] (0xc000aec9a0) (0xc000b786e0) Create stream\nI0519 00:13:03.350185 1837 log.go:172] (0xc000aec9a0) (0xc000b786e0) Stream added, broadcasting: 1\nI0519 00:13:03.370728 1837 log.go:172] (0xc000aec9a0) Reply frame received for 1\nI0519 00:13:03.370768 1837 log.go:172] (0xc000aec9a0) (0xc0003e1540) Create stream\nI0519 00:13:03.370778 1837 log.go:172] (0xc000aec9a0) (0xc0003e1540) Stream added, broadcasting: 3\nI0519 00:13:03.371583 1837 log.go:172] (0xc000aec9a0) Reply frame received for 3\nI0519 00:13:03.371606 1837 log.go:172] (0xc000aec9a0) (0xc000b78780) Create stream\nI0519 00:13:03.371614 1837 log.go:172] (0xc000aec9a0) (0xc000b78780) Stream added, broadcasting: 5\nI0519 00:13:03.372302 1837 log.go:172] (0xc000aec9a0) Reply frame received for 5\nI0519 00:13:03.466053 1837 log.go:172] (0xc000aec9a0) Data frame received for 5\nI0519 00:13:03.466086 1837 log.go:172] (0xc000b78780) (5) Data frame handling\nI0519 00:13:03.466111 1837 log.go:172] (0xc000b78780) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0519 00:13:03.480040 1837 log.go:172] (0xc000aec9a0) Data frame received for 3\nI0519 00:13:03.480061 1837 log.go:172] (0xc0003e1540) (3) Data frame handling\nI0519 00:13:03.480076 1837 log.go:172] (0xc0003e1540) (3) Data frame sent\nI0519 00:13:03.480928 1837 log.go:172] (0xc000aec9a0) Data frame received for 3\nI0519 00:13:03.480949 1837 log.go:172] (0xc0003e1540) (3) Data frame handling\nI0519 00:13:03.481026 1837 log.go:172] (0xc000aec9a0) Data frame received for 5\nI0519 00:13:03.481051 1837 log.go:172] (0xc000b78780) (5) Data frame handling\nI0519 00:13:03.483217 
1837 log.go:172] (0xc000aec9a0) Data frame received for 1\nI0519 00:13:03.483233 1837 log.go:172] (0xc000b786e0) (1) Data frame handling\nI0519 00:13:03.483269 1837 log.go:172] (0xc000b786e0) (1) Data frame sent\nI0519 00:13:03.483291 1837 log.go:172] (0xc000aec9a0) (0xc000b786e0) Stream removed, broadcasting: 1\nI0519 00:13:03.483345 1837 log.go:172] (0xc000aec9a0) Go away received\nI0519 00:13:03.483625 1837 log.go:172] (0xc000aec9a0) (0xc000b786e0) Stream removed, broadcasting: 1\nI0519 00:13:03.483646 1837 log.go:172] (0xc000aec9a0) (0xc0003e1540) Stream removed, broadcasting: 3\nI0519 00:13:03.483663 1837 log.go:172] (0xc000aec9a0) (0xc000b78780) Stream removed, broadcasting: 5\n" May 19 00:13:03.490: INFO: stdout: "iptables" May 19 00:13:03.490: INFO: proxyMode: iptables May 19 00:13:03.495: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 00:13:03.513: INFO: Pod kube-proxy-mode-detector still exists May 19 00:13:05.513: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 00:13:05.516: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-1802 STEP: creating replication controller affinity-nodeport-timeout in namespace services-1802 I0519 00:13:05.571841 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1802, replica count: 3 I0519 00:13:08.622316 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:13:11.622566 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 00:13:11.634: INFO: Creating new exec pod May 19 00:13:16.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 execpod-affinitytbr9v -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 19 00:13:16.916: INFO: stderr: "I0519 00:13:16.799174 1857 log.go:172] (0xc000a33ad0) (0xc0006f7cc0) Create stream\nI0519 00:13:16.799233 1857 log.go:172] (0xc000a33ad0) (0xc0006f7cc0) Stream added, broadcasting: 1\nI0519 00:13:16.804221 1857 log.go:172] (0xc000a33ad0) Reply frame received for 1\nI0519 00:13:16.804300 1857 log.go:172] (0xc000a33ad0) (0xc000654640) Create stream\nI0519 00:13:16.804324 1857 log.go:172] (0xc000a33ad0) (0xc000654640) Stream added, broadcasting: 3\nI0519 00:13:16.805755 1857 log.go:172] (0xc000a33ad0) Reply frame received for 3\nI0519 00:13:16.805787 1857 log.go:172] (0xc000a33ad0) (0xc000654b40) Create stream\nI0519 00:13:16.805799 1857 log.go:172] (0xc000a33ad0) (0xc000654b40) Stream added, broadcasting: 5\nI0519 00:13:16.806711 1857 log.go:172] (0xc000a33ad0) Reply frame received for 5\nI0519 00:13:16.893008 1857 log.go:172] (0xc000a33ad0) Data frame received for 5\nI0519 00:13:16.893041 1857 log.go:172] (0xc000654b40) (5) Data frame handling\nI0519 00:13:16.893068 1857 log.go:172] (0xc000654b40) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0519 00:13:16.906870 1857 log.go:172] (0xc000a33ad0) Data frame received for 5\nI0519 00:13:16.906894 1857 log.go:172] (0xc000654b40) (5) Data frame handling\nI0519 00:13:16.906919 1857 log.go:172] (0xc000654b40) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0519 00:13:16.907510 1857 log.go:172] (0xc000a33ad0) Data 
frame received for 3\nI0519 00:13:16.907549 1857 log.go:172] (0xc000654640) (3) Data frame handling\nI0519 00:13:16.907663 1857 log.go:172] (0xc000a33ad0) Data frame received for 5\nI0519 00:13:16.907693 1857 log.go:172] (0xc000654b40) (5) Data frame handling\nI0519 00:13:16.909790 1857 log.go:172] (0xc000a33ad0) Data frame received for 1\nI0519 00:13:16.909820 1857 log.go:172] (0xc0006f7cc0) (1) Data frame handling\nI0519 00:13:16.909834 1857 log.go:172] (0xc0006f7cc0) (1) Data frame sent\nI0519 00:13:16.909853 1857 log.go:172] (0xc000a33ad0) (0xc0006f7cc0) Stream removed, broadcasting: 1\nI0519 00:13:16.909883 1857 log.go:172] (0xc000a33ad0) Go away received\nI0519 00:13:16.910248 1857 log.go:172] (0xc000a33ad0) (0xc0006f7cc0) Stream removed, broadcasting: 1\nI0519 00:13:16.910272 1857 log.go:172] (0xc000a33ad0) (0xc000654640) Stream removed, broadcasting: 3\nI0519 00:13:16.910291 1857 log.go:172] (0xc000a33ad0) (0xc000654b40) Stream removed, broadcasting: 5\n" May 19 00:13:16.916: INFO: stdout: "" May 19 00:13:16.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 execpod-affinitytbr9v -- /bin/sh -x -c nc -zv -t -w 2 10.104.71.189 80' May 19 00:13:17.116: INFO: stderr: "I0519 00:13:17.047223 1877 log.go:172] (0xc00003a8f0) (0xc0004c81e0) Create stream\nI0519 00:13:17.047306 1877 log.go:172] (0xc00003a8f0) (0xc0004c81e0) Stream added, broadcasting: 1\nI0519 00:13:17.050053 1877 log.go:172] (0xc00003a8f0) Reply frame received for 1\nI0519 00:13:17.050087 1877 log.go:172] (0xc00003a8f0) (0xc0004c9180) Create stream\nI0519 00:13:17.050097 1877 log.go:172] (0xc00003a8f0) (0xc0004c9180) Stream added, broadcasting: 3\nI0519 00:13:17.051238 1877 log.go:172] (0xc00003a8f0) Reply frame received for 3\nI0519 00:13:17.051271 1877 log.go:172] (0xc00003a8f0) (0xc000384d20) Create stream\nI0519 00:13:17.051283 1877 log.go:172] (0xc00003a8f0) (0xc000384d20) Stream added, broadcasting: 5\nI0519 00:13:17.052308 1877 log.go:172] (0xc00003a8f0) Reply frame received for 5\nI0519 00:13:17.109677 1877 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0519 00:13:17.109708 1877 log.go:172] (0xc000384d20) (5) Data frame handling\nI0519 00:13:17.109725 1877 log.go:172] (0xc000384d20) (5) Data frame sent\n+ nc -zv -t -w 2 10.104.71.189 80\nConnection to 10.104.71.189 80 port [tcp/http] succeeded!\nI0519 00:13:17.109874 1877 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0519 00:13:17.109913 1877 log.go:172] (0xc0004c9180) (3) Data frame handling\nI0519 00:13:17.109960 1877 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0519 00:13:17.110017 1877 log.go:172] (0xc000384d20) (5) Data frame handling\nI0519 00:13:17.110792 1877 log.go:172] (0xc00003a8f0) Data frame received for 1\nI0519 00:13:17.110812 1877 log.go:172] (0xc0004c81e0) (1) Data frame handling\nI0519 00:13:17.110823 1877 log.go:172] (0xc0004c81e0) (1) Data frame sent\nI0519 00:13:17.110839 1877 log.go:172] (0xc00003a8f0) (0xc0004c81e0) Stream removed, broadcasting: 1\nI0519 00:13:17.110892 1877 log.go:172] (0xc00003a8f0) Go away received\nI0519 00:13:17.111180 1877 log.go:172] (0xc00003a8f0) (0xc0004c81e0) Stream removed, broadcasting: 1\nI0519 00:13:17.111197 1877 log.go:172] (0xc00003a8f0) (0xc0004c9180) Stream removed, broadcasting: 3\nI0519 00:13:17.111209 1877 log.go:172] (0xc00003a8f0) (0xc000384d20) Stream removed, broadcasting: 5\n" May 19 00:13:17.116: INFO: stdout: "" May 19 00:13:17.116: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 execpod-affinitytbr9v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32028' May 19 00:13:17.348: INFO: stderr: "I0519 00:13:17.261510 1897 log.go:172] (0xc000b56000) (0xc0004f6000) Create stream\nI0519 00:13:17.261577 1897 log.go:172] (0xc000b56000) (0xc0004f6000) Stream added, broadcasting: 1\nI0519 00:13:17.263283 1897 log.go:172] (0xc000b56000) Reply frame received for 1\nI0519 00:13:17.263328 1897 log.go:172] (0xc000b56000) (0xc00061ee60) Create stream\nI0519 00:13:17.263354 1897 log.go:172] (0xc000b56000) (0xc00061ee60) Stream added, broadcasting: 3\nI0519 00:13:17.264292 1897 log.go:172] (0xc000b56000) Reply frame received for 3\nI0519 00:13:17.264320 1897 log.go:172] (0xc000b56000) (0xc000268460) Create stream\nI0519 00:13:17.264331 1897 log.go:172] (0xc000b56000) (0xc000268460) Stream added, broadcasting: 5\nI0519 00:13:17.265283 1897 log.go:172] (0xc000b56000) Reply frame received for 5\nI0519 00:13:17.326184 1897 log.go:172] (0xc000b56000) Data frame received for 3\nI0519 00:13:17.326213 1897 log.go:172] (0xc00061ee60) (3) Data frame handling\nI0519 00:13:17.331272 1897 log.go:172] (0xc000b56000) Data frame received for 5\nI0519 00:13:17.331300 1897 log.go:172] (0xc000268460) (5) Data frame handling\nI0519 00:13:17.331316 1897 log.go:172] (0xc000268460) (5) Data frame sent\nI0519 00:13:17.331327 1897 log.go:172] (0xc000b56000) Data frame received for 5\nI0519 00:13:17.331337 1897 log.go:172] (0xc000268460) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 32028\nConnection to 172.17.0.13 32028 port [tcp/32028] succeeded!\nI0519 00:13:17.344403 1897 log.go:172] (0xc000b56000) Data frame received for 1\nI0519 00:13:17.344433 1897 log.go:172] (0xc0004f6000) (1) Data frame handling\nI0519 00:13:17.344443 1897 log.go:172] (0xc0004f6000) (1) Data frame sent\nI0519 00:13:17.344454 1897 log.go:172] (0xc000b56000) (0xc0004f6000) Stream removed, broadcasting: 1\nI0519 00:13:17.344472 1897 log.go:172] (0xc000b56000) Go away received\nI0519 00:13:17.344857 1897 log.go:172] (0xc000b56000) (0xc0004f6000) Stream removed, broadcasting: 1\nI0519 00:13:17.344878 1897 log.go:172] (0xc000b56000) (0xc00061ee60) Stream removed, broadcasting: 3\nI0519 00:13:17.344888 1897 log.go:172] (0xc000b56000) (0xc000268460) Stream removed, broadcasting: 5\n" May 19 00:13:17.348: INFO: stdout: "" May 19 00:13:17.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 execpod-affinitytbr9v -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32028' May 19 00:13:17.541: INFO: stderr: "I0519 00:13:17.473505 1918 log.go:172] (0xc00003b1e0) (0xc000137c20) Create stream\nI0519 00:13:17.473563 1918 log.go:172] (0xc00003b1e0) (0xc000137c20) Stream added, broadcasting: 1\nI0519 00:13:17.475635 1918 log.go:172] (0xc00003b1e0) Reply frame received for 1\nI0519 00:13:17.475677 1918 log.go:172] (0xc00003b1e0) (0xc00051e140) Create stream\nI0519 00:13:17.475692 1918 log.go:172] (0xc00003b1e0) (0xc00051e140) Stream added, broadcasting: 3\nI0519 00:13:17.476408 1918 log.go:172] (0xc00003b1e0) Reply frame received for 3\nI0519 00:13:17.476448 1918 log.go:172] (0xc00003b1e0) (0xc000322e60) Create stream\nI0519 00:13:17.476463 1918 log.go:172] (0xc00003b1e0) (0xc000322e60) Stream added, broadcasting: 5\nI0519 00:13:17.477598 1918 log.go:172] (0xc00003b1e0) Reply frame received for 5\nI0519 00:13:17.531921 1918 log.go:172] (0xc00003b1e0) Data frame received 
for 5\nI0519 00:13:17.532043 1918 log.go:172] (0xc000322e60) (5) Data frame handling\nI0519 00:13:17.532105 1918 log.go:172] (0xc000322e60) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 32028\nI0519 00:13:17.534005 1918 log.go:172] (0xc00003b1e0) Data frame received for 3\nI0519 00:13:17.534063 1918 log.go:172] (0xc00051e140) (3) Data frame handling\nI0519 00:13:17.534312 1918 log.go:172] (0xc00003b1e0) Data frame received for 5\nI0519 00:13:17.534362 1918 log.go:172] (0xc000322e60) (5) Data frame handling\nI0519 00:13:17.534402 1918 log.go:172] (0xc000322e60) (5) Data frame sent\nI0519 00:13:17.534438 1918 log.go:172] (0xc00003b1e0) Data frame received for 5\nI0519 00:13:17.534456 1918 log.go:172] (0xc000322e60) (5) Data frame handling\nConnection to 172.17.0.12 32028 port [tcp/32028] succeeded!\nI0519 00:13:17.535994 1918 log.go:172] (0xc00003b1e0) Data frame received for 1\nI0519 00:13:17.536009 1918 log.go:172] (0xc000137c20) (1) Data frame handling\nI0519 00:13:17.536024 1918 log.go:172] (0xc000137c20) (1) Data frame sent\nI0519 00:13:17.536041 1918 log.go:172] (0xc00003b1e0) (0xc000137c20) Stream removed, broadcasting: 1\nI0519 00:13:17.536153 1918 log.go:172] (0xc00003b1e0) Go away received\nI0519 00:13:17.536402 1918 log.go:172] (0xc00003b1e0) (0xc000137c20) Stream removed, broadcasting: 1\nI0519 00:13:17.536417 1918 log.go:172] (0xc00003b1e0) (0xc00051e140) Stream removed, broadcasting: 3\nI0519 00:13:17.536426 1918 log.go:172] (0xc00003b1e0) (0xc000322e60) Stream removed, broadcasting: 5\n" May 19 00:13:17.541: INFO: stdout: "" May 19 00:13:17.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 execpod-affinitytbr9v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:32028/ ; done' May 19 00:13:17.822: INFO: stderr: "I0519 00:13:17.665449 1938 log.go:172] (0xc000ae3130) (0xc000834f00) Create stream\nI0519 00:13:17.665526 1938 log.go:172] (0xc000ae3130) (0xc000834f00) Stream added, broadcasting: 1\nI0519 00:13:17.669752 1938 log.go:172] (0xc000ae3130) Reply frame received for 1\nI0519 00:13:17.669785 1938 log.go:172] (0xc000ae3130) (0xc000827b80) Create stream\nI0519 00:13:17.669794 1938 log.go:172] (0xc000ae3130) (0xc000827b80) Stream added, broadcasting: 3\nI0519 00:13:17.670835 1938 log.go:172] (0xc000ae3130) Reply frame received for 3\nI0519 00:13:17.670912 1938 log.go:172] (0xc000ae3130) (0xc0006a01e0) Create stream\nI0519 00:13:17.670937 1938 log.go:172] (0xc000ae3130) (0xc0006a01e0) Stream added, broadcasting: 5\nI0519 00:13:17.671814 1938 log.go:172] (0xc000ae3130) Reply frame received for 5\nI0519 00:13:17.731431 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.731453 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.731475 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.731501 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.731509 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.731535 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.740473 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.740499 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.740520 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 
00:13:17.740692 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.740707 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.740722 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.746280 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.746307 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.746318 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.746327 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.746332 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.746337 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.750017 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.750031 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.750042 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.750529 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.750547 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.750554 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.750562 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.750567 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.750576 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.754278 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.754295 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.754309 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.754931 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.754955 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.754967 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.754979 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.754987 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.755007 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.758412 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.758425 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.758435 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.758725 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.758745 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.758756 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.758771 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.758779 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.758797 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.762439 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.762464 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.762476 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.762919 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.762940 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.762947 1938 log.go:172] (0xc0006a01e0) (5) Data frame 
sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.762956 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.762960 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.762968 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.766227 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.766241 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.766251 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.766599 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.766611 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.766619 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.766636 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.766642 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.766647 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.771746 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.771767 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.771786 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.772133 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.772151 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.772159 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.772170 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.772177 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.772183 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.776954 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.776969 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.776982 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.777664 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.777690 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.777702 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\nI0519 00:13:17.777715 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.777725 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.777735 1938 log.go:172] (0xc000827b80) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.783450 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.783639 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.783735 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.784021 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.784055 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.784072 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ I0519 00:13:17.784102 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.784126 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.784150 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.784326 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.784355 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.784379 1938 log.go:172] 
(0xc0006a01e0) (5) Data frame sent\ncurlI0519 00:13:17.784519 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.784535 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.784547 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n -qI0519 00:13:17.784786 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.784799 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.784811 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.789078 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.789254 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.789283 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.789868 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.789881 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.789889 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.789934 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.789948 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.789954 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.795162 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.795179 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.795191 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.795626 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.795645 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.795658 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\nI0519 00:13:17.795666 1938 log.go:172] (0xc000ae3130) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0519 00:13:17.795680 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.795693 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.795717 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.795735 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.795752 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n http://172.17.0.13:32028/\nI0519 00:13:17.799837 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.799853 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.799866 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.800593 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.800620 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.800630 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.800645 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.800655 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.800664 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.804922 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.804947 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.804964 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.805832 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.805852 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.805870 1938 log.go:172] 
(0xc0006a01e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.805953 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.805963 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.805972 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.810415 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.810431 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.810441 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.810833 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.810845 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.810851 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\nI0519 00:13:17.810856 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.810860 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:17.810879 1938 log.go:172] (0xc0006a01e0) (5) Data frame sent\nI0519 00:13:17.810898 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.810916 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.810926 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.815285 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.815313 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.815339 1938 log.go:172] (0xc000827b80) (3) Data frame sent\nI0519 00:13:17.815840 1938 log.go:172] (0xc000ae3130) Data frame received for 5\nI0519 00:13:17.815867 1938 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0519 00:13:17.815937 1938 log.go:172] (0xc000ae3130) Data frame received for 3\nI0519 00:13:17.815955 1938 log.go:172] (0xc000827b80) (3) Data frame handling\nI0519 00:13:17.817528 1938 log.go:172] (0xc000ae3130) Data frame received for 1\nI0519 00:13:17.817543 1938 log.go:172] (0xc000834f00) (1) Data frame handling\nI0519 00:13:17.817555 1938 log.go:172] (0xc000834f00) (1) Data frame sent\nI0519 00:13:17.817566 1938 log.go:172] (0xc000ae3130) (0xc000834f00) Stream removed, broadcasting: 1\nI0519 00:13:17.817681 1938 log.go:172] (0xc000ae3130) Go away received\nI0519 00:13:17.818066 1938 log.go:172] (0xc000ae3130) (0xc000834f00) Stream removed, broadcasting: 1\nI0519 00:13:17.818085 1938 log.go:172] (0xc000ae3130) (0xc000827b80) Stream removed, broadcasting: 3\nI0519 00:13:17.818097 1938 log.go:172] (0xc000ae3130) (0xc0006a01e0) Stream removed, broadcasting: 5\n" May 19 00:13:17.823: INFO: stdout: "\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2\naffinity-nodeport-timeout-sl7f2" May 19 00:13:17.823: INFO: Received response from host: May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from 
host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Received response from host: affinity-nodeport-timeout-sl7f2 May 19 00:13:17.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 execpod-affinitytbr9v -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32028/' May 19 00:13:18.040: INFO: stderr: "I0519 00:13:17.953972 1958 log.go:172] (0xc00098d4a0) (0xc000bf65a0) Create stream\nI0519 00:13:17.954029 1958 log.go:172] (0xc00098d4a0) (0xc000bf65a0) Stream added, broadcasting: 1\nI0519 00:13:17.958626 1958 log.go:172] (0xc00098d4a0) Reply frame received for 1\nI0519 00:13:17.958670 1958 log.go:172] (0xc00098d4a0) (0xc000514320) Create stream\nI0519 00:13:17.958687 1958 log.go:172] (0xc00098d4a0) (0xc000514320) Stream added, broadcasting: 3\nI0519 00:13:17.959686 1958 log.go:172] (0xc00098d4a0) Reply frame received for 3\nI0519 00:13:17.959749 1958 log.go:172] (0xc00098d4a0) (0xc0004f0280) Create stream\nI0519 00:13:17.959763 1958 log.go:172] (0xc00098d4a0) (0xc0004f0280) Stream added, broadcasting: 5\nI0519 00:13:17.960736 1958 log.go:172] (0xc00098d4a0) Reply frame received for 5\nI0519 00:13:18.025647 1958 log.go:172] (0xc00098d4a0) Data frame received for 5\nI0519 00:13:18.025670 1958 log.go:172] (0xc0004f0280) (5) Data frame handling\nI0519 00:13:18.025684 1958 log.go:172] (0xc0004f0280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:18.031877 1958 log.go:172] (0xc00098d4a0) Data frame received for 3\nI0519 00:13:18.031913 1958 log.go:172] (0xc000514320) (3) Data frame handling\nI0519 00:13:18.031947 1958 log.go:172] (0xc000514320) (3) Data frame sent\nI0519 00:13:18.032704 1958 log.go:172] (0xc00098d4a0) Data frame received for 5\nI0519 00:13:18.032743 1958 log.go:172] (0xc0004f0280) (5) Data frame handling\nI0519 00:13:18.032776 1958 log.go:172] (0xc00098d4a0) Data frame received for 3\nI0519 00:13:18.032793 1958 log.go:172] (0xc000514320) (3) Data frame handling\nI0519 00:13:18.034676 1958 log.go:172] (0xc00098d4a0) Data frame received for 1\nI0519 00:13:18.034704 1958 log.go:172] (0xc000bf65a0) (1) Data frame handling\nI0519 00:13:18.034718 1958 log.go:172] (0xc000bf65a0) (1) Data frame sent\nI0519 00:13:18.034732 1958 log.go:172] (0xc00098d4a0) (0xc000bf65a0) Stream removed, broadcasting: 1\nI0519 00:13:18.034793 1958 log.go:172] (0xc00098d4a0) Go away received\nI0519 00:13:18.035263 1958 log.go:172] (0xc00098d4a0) (0xc000bf65a0) Stream removed, broadcasting: 1\nI0519 
00:13:18.035306 1958 log.go:172] (0xc00098d4a0) (0xc000514320) Stream removed, broadcasting: 3\nI0519 00:13:18.035339 1958 log.go:172] (0xc00098d4a0) (0xc0004f0280) Stream removed, broadcasting: 5\n" May 19 00:13:18.040: INFO: stdout: "affinity-nodeport-timeout-sl7f2" May 19 00:13:33.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 execpod-affinitytbr9v -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32028/' May 19 00:13:33.272: INFO: stderr: "I0519 00:13:33.178198 1978 log.go:172] (0xc000b9eb00) (0xc00052c820) Create stream\nI0519 00:13:33.178247 1978 log.go:172] (0xc000b9eb00) (0xc00052c820) Stream added, broadcasting: 1\nI0519 00:13:33.180840 1978 log.go:172] (0xc000b9eb00) Reply frame received for 1\nI0519 00:13:33.180885 1978 log.go:172] (0xc000b9eb00) (0xc0002ea0a0) Create stream\nI0519 00:13:33.180903 1978 log.go:172] (0xc000b9eb00) (0xc0002ea0a0) Stream added, broadcasting: 3\nI0519 00:13:33.182085 1978 log.go:172] (0xc000b9eb00) Reply frame received for 3\nI0519 00:13:33.182113 1978 log.go:172] (0xc000b9eb00) (0xc00052cf00) Create stream\nI0519 00:13:33.182121 1978 log.go:172] (0xc000b9eb00) (0xc00052cf00) Stream added, broadcasting: 5\nI0519 00:13:33.182850 1978 log.go:172] (0xc000b9eb00) Reply frame received for 5\nI0519 00:13:33.260077 1978 log.go:172] (0xc000b9eb00) Data frame received for 5\nI0519 00:13:33.260103 1978 log.go:172] (0xc00052cf00) (5) Data frame handling\nI0519 00:13:33.260118 1978 log.go:172] (0xc00052cf00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:33.264058 1978 log.go:172] (0xc000b9eb00) Data frame received for 3\nI0519 00:13:33.264081 1978 log.go:172] (0xc0002ea0a0) (3) Data frame handling\nI0519 00:13:33.264100 1978 log.go:172] (0xc0002ea0a0) (3) Data frame sent\nI0519 00:13:33.264586 1978 log.go:172] (0xc000b9eb00) Data frame received for 5\nI0519 00:13:33.264623 1978 log.go:172] (0xc00052cf00) (5) Data frame handling\nI0519 00:13:33.264825 1978 log.go:172] (0xc000b9eb00) Data frame received for 3\nI0519 00:13:33.264861 1978 log.go:172] (0xc0002ea0a0) (3) Data frame handling\nI0519 00:13:33.267045 1978 log.go:172] (0xc000b9eb00) Data frame received for 1\nI0519 00:13:33.267064 1978 log.go:172] (0xc00052c820) (1) Data frame handling\nI0519 00:13:33.267083 1978 log.go:172] (0xc00052c820) (1) Data frame sent\nI0519 00:13:33.267096 1978 log.go:172] (0xc000b9eb00) (0xc00052c820) Stream removed, broadcasting: 1\nI0519 00:13:33.267119 1978 log.go:172] (0xc000b9eb00) Go away received\nI0519 00:13:33.267429 1978 log.go:172] (0xc000b9eb00) (0xc00052c820) Stream removed, broadcasting: 1\nI0519 00:13:33.267449 1978 log.go:172] (0xc000b9eb00) (0xc0002ea0a0) Stream removed, broadcasting: 3\nI0519 00:13:33.267458 1978 log.go:172] (0xc000b9eb00) (0xc00052cf00) Stream removed, broadcasting: 5\n" May 19 00:13:33.272: INFO: stdout: "affinity-nodeport-timeout-sl7f2" May 19 00:13:48.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 execpod-affinitytbr9v -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32028/' May 19 00:13:48.495: INFO: stderr: "I0519 00:13:48.408431 1998 log.go:172] (0xc00091d3f0) (0xc00081af00) Create stream\nI0519 00:13:48.408488 1998 log.go:172] (0xc00091d3f0) (0xc00081af00) Stream added, broadcasting: 1\nI0519 00:13:48.410815 1998 log.go:172] (0xc00091d3f0) Reply frame received for 
1\nI0519 00:13:48.410875 1998 log.go:172] (0xc00091d3f0) (0xc00081b4a0) Create stream\nI0519 00:13:48.410890 1998 log.go:172] (0xc00091d3f0) (0xc00081b4a0) Stream added, broadcasting: 3\nI0519 00:13:48.411885 1998 log.go:172] (0xc00091d3f0) Reply frame received for 3\nI0519 00:13:48.411924 1998 log.go:172] (0xc00091d3f0) (0xc000519360) Create stream\nI0519 00:13:48.411936 1998 log.go:172] (0xc00091d3f0) (0xc000519360) Stream added, broadcasting: 5\nI0519 00:13:48.412889 1998 log.go:172] (0xc00091d3f0) Reply frame received for 5\nI0519 00:13:48.482474 1998 log.go:172] (0xc00091d3f0) Data frame received for 5\nI0519 00:13:48.482514 1998 log.go:172] (0xc000519360) (5) Data frame handling\nI0519 00:13:48.482542 1998 log.go:172] (0xc000519360) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:32028/\nI0519 00:13:48.487262 1998 log.go:172] (0xc00091d3f0) Data frame received for 3\nI0519 00:13:48.487293 1998 log.go:172] (0xc00081b4a0) (3) Data frame handling\nI0519 00:13:48.487338 1998 log.go:172] (0xc00081b4a0) (3) Data frame sent\nI0519 00:13:48.487970 1998 log.go:172] (0xc00091d3f0) Data frame received for 5\nI0519 00:13:48.488002 1998 log.go:172] (0xc000519360) (5) Data frame handling\nI0519 00:13:48.488060 1998 log.go:172] (0xc00091d3f0) Data frame received for 3\nI0519 00:13:48.488096 1998 log.go:172] (0xc00081b4a0) (3) Data frame handling\nI0519 00:13:48.490545 1998 log.go:172] (0xc00091d3f0) Data frame received for 1\nI0519 00:13:48.490593 1998 log.go:172] (0xc00081af00) (1) Data frame handling\nI0519 00:13:48.490627 1998 log.go:172] (0xc00081af00) (1) Data frame sent\nI0519 00:13:48.490654 1998 log.go:172] (0xc00091d3f0) (0xc00081af00) Stream removed, broadcasting: 1\nI0519 00:13:48.490687 1998 log.go:172] (0xc00091d3f0) Go away received\nI0519 00:13:48.491029 1998 log.go:172] (0xc00091d3f0) (0xc00081af00) Stream removed, broadcasting: 1\nI0519 00:13:48.491059 1998 log.go:172] (0xc00091d3f0) (0xc00081b4a0) Stream removed, broadcasting: 3\nI0519 00:13:48.491079 1998 log.go:172] (0xc00091d3f0) (0xc000519360) Stream removed, broadcasting: 5\n" May 19 00:13:48.495: INFO: stdout: "affinity-nodeport-timeout-sl7f2" May 19 00:14:03.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1802 execpod-affinitytbr9v -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:32028/' May 19 00:14:03.755: INFO: stderr: "I0519 00:14:03.635639 2014 log.go:172] (0xc0008ebc30) (0xc0008de640) Create stream\nI0519 00:14:03.635695 2014 log.go:172] (0xc0008ebc30) (0xc0008de640) Stream added, broadcasting: 1\nI0519 00:14:03.640547 2014 log.go:172] (0xc0008ebc30) Reply frame received for 1\nI0519 00:14:03.640614 2014 log.go:172] (0xc0008ebc30) (0xc0006cee60) Create stream\nI0519 00:14:03.640641 2014 log.go:172] (0xc0008ebc30) (0xc0006cee60) Stream added, broadcasting: 3\nI0519 00:14:03.642066 2014 log.go:172] (0xc0008ebc30) Reply frame received for 3\nI0519 00:14:03.642127 2014 log.go:172] (0xc0008ebc30) (0xc0006b0460) Create stream\nI0519 00:14:03.642146 2014 log.go:172] (0xc0008ebc30) (0xc0006b0460) Stream added, broadcasting: 5\nI0519 00:14:03.643340 2014 log.go:172] (0xc0008ebc30) Reply frame received for 5\nI0519 00:14:03.741484 2014 log.go:172] (0xc0008ebc30) Data frame received for 5\nI0519 00:14:03.741513 2014 log.go:172] (0xc0006b0460) (5) Data frame handling\nI0519 00:14:03.741536 2014 log.go:172] (0xc0006b0460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:32028/\nI0519 00:14:03.746780 2014 log.go:172] (0xc0008ebc30) Data frame received for 3\nI0519 00:14:03.746807 2014 log.go:172] (0xc0006cee60) (3) Data frame handling\nI0519 00:14:03.746829 2014 log.go:172] (0xc0006cee60) (3) Data frame sent\nI0519 00:14:03.747634 2014 log.go:172] (0xc0008ebc30) Data frame received for 3\nI0519 00:14:03.747660 2014 log.go:172] (0xc0006cee60) (3) Data frame handling\nI0519 00:14:03.748109 2014 log.go:172] (0xc0008ebc30) Data frame received for 5\nI0519 00:14:03.748140 2014 log.go:172] (0xc0006b0460) (5) Data frame handling\nI0519 00:14:03.749665 2014 log.go:172] (0xc0008ebc30) Data frame received for 1\nI0519 00:14:03.749692 2014 log.go:172] (0xc0008de640) (1) Data frame handling\nI0519 00:14:03.749711 2014 log.go:172] (0xc0008de640) (1) Data frame sent\nI0519 00:14:03.749817 2014 log.go:172] (0xc0008ebc30) (0xc0008de640) Stream removed, broadcasting: 1\nI0519 00:14:03.750013 2014 log.go:172] (0xc0008ebc30) Go away received\nI0519 00:14:03.750255 2014 log.go:172] (0xc0008ebc30) (0xc0008de640) Stream removed, broadcasting: 1\nI0519 00:14:03.750283 2014 log.go:172] (0xc0008ebc30) (0xc0006cee60) Stream removed, broadcasting: 3\nI0519 00:14:03.750307 2014 log.go:172] (0xc0008ebc30) (0xc0006b0460) Stream removed, broadcasting: 5\n" May 19 00:14:03.755: INFO: stdout: "affinity-nodeport-timeout-5kzqk" May 19 00:14:03.755: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1802, will wait for the garbage collector to delete the pods May 19 00:14:03.841: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 22.024167ms May 19 00:14:04.342: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.208184ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:14:15.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1802" for this suite. 
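The spec above is the session-affinity-with-timeout check in action: sixteen rapid requests through the NodePort all returned affinity-nodeport-timeout-sl7f2, the follow-up probes spaced roughly 15 seconds apart still returned sl7f2, and the final probe came back from affinity-nodeport-timeout-5kzqk once the affinity entry had expired. The log never prints the Service manifest the framework created, so the following is only a minimal sketch of a Service that behaves this way; the selector, port, and timeoutSeconds are illustrative assumptions, not values recovered from the test:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-nodeport-timeout
      namespace: services-1802
    spec:
      type: NodePort
      selector:
        name: affinity-nodeport-timeout  # assumed label; the RC's actual selector is not shown in the log
      ports:
      - port: 80                         # the port probed above with nc and curl
      sessionAffinity: ClientIP          # pin each client IP to a single backend pod
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10             # illustrative value; the affinity entry expires after this idle period
    EOF

With such a Service in place, repeated curls against <node-ip>:<node-port> from one client should keep hitting the same backend until the configured timeout elapses, which is exactly the pattern the kubectl exec probes above demonstrate.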
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:76.017 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":90,"skipped":1666,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:14:15.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:14:15.696: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:14:17.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444055, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444055, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444055, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444055, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:14:20.756: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 19 00:14:20.775: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:14:20.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-7703" for this suite. STEP: Destroying namespace "webhook-7703-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.896 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":91,"skipped":1667,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:14:20.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 19 00:14:21.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 19 00:14:21.708: INFO: stderr: "" May 19 00:14:21.708: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:14:21.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2385" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":92,"skipped":1672,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:14:21.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 19 00:14:21.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9165' May 19 00:14:22.259: INFO: stderr: "" May 19 00:14:22.259: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 19 00:14:22.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9165' May 19 00:14:26.347: INFO: stderr: "" May 19 00:14:26.347: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:14:26.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9165" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":93,"skipped":1696,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:14:26.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 19 00:14:27.183: INFO: Pod name wrapped-volume-race-8154b7d5-7a84-46d2-90b1-61d395a4aff7: Found 0 pods out of 5 May 19 00:14:32.194: INFO: Pod name wrapped-volume-race-8154b7d5-7a84-46d2-90b1-61d395a4aff7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8154b7d5-7a84-46d2-90b1-61d395a4aff7 in namespace emptydir-wrapper-8588, will wait for the garbage collector to delete the pods May 19 00:14:48.314: INFO: Deleting ReplicationController wrapped-volume-race-8154b7d5-7a84-46d2-90b1-61d395a4aff7 took: 9.988664ms May 19 00:14:48.715: INFO: Terminating ReplicationController wrapped-volume-race-8154b7d5-7a84-46d2-90b1-61d395a4aff7 pods took: 400.312723ms STEP: Creating RC which spawns configmap-volume pods May 19 00:15:05.155: INFO: Pod name wrapped-volume-race-0729530a-e54e-4d0d-981c-2c1503a2a8fc: Found 0 pods out of 5 May 19 00:15:10.164: INFO: Pod name wrapped-volume-race-0729530a-e54e-4d0d-981c-2c1503a2a8fc: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0729530a-e54e-4d0d-981c-2c1503a2a8fc in namespace emptydir-wrapper-8588, will wait for the garbage collector to delete the pods May 19 00:15:26.247: INFO: Deleting ReplicationController wrapped-volume-race-0729530a-e54e-4d0d-981c-2c1503a2a8fc took: 8.436534ms May 19 00:15:26.647: INFO: Terminating ReplicationController wrapped-volume-race-0729530a-e54e-4d0d-981c-2c1503a2a8fc pods took: 400.308407ms STEP: Creating RC which spawns configmap-volume pods May 19 00:15:35.202: INFO: Pod name wrapped-volume-race-ba637ca9-950c-4229-858f-14f22a469a8b: Found 0 pods out of 5 May 19 00:15:40.211: INFO: Pod name wrapped-volume-race-ba637ca9-950c-4229-858f-14f22a469a8b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ba637ca9-950c-4229-858f-14f22a469a8b in namespace emptydir-wrapper-8588, will wait for the garbage collector to delete the pods May 19 00:15:56.329: INFO: Deleting ReplicationController wrapped-volume-race-ba637ca9-950c-4229-858f-14f22a469a8b took: 21.938044ms May 19 00:15:56.629: INFO: Terminating ReplicationController wrapped-volume-race-ba637ca9-950c-4229-858f-14f22a469a8b pods took: 300.229339ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:16:05.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8588" for this suite. • [SLOW TEST:99.231 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":94,"skipped":1744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:16:05.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:16:05.694: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 19 00:16:10.720: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 00:16:10.721: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 19 00:16:16.796: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9036 /apis/apps/v1/namespaces/deployment-9036/deployments/test-cleanup-deployment daa4ef7d-e9be-431b-b48b-301f8440fbfe 5816382 1 2020-05-19 00:16:10 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-05-19 00:16:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-19 00:16:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003084958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-19 00:16:10 +0000 UTC,LastTransitionTime:2020-05-19 00:16:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-6688745694" has successfully progressed.,LastUpdateTime:2020-05-19 00:16:14 +0000 UTC,LastTransitionTime:2020-05-19 00:16:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 19 00:16:16.798: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-9036 /apis/apps/v1/namespaces/deployment-9036/replicasets/test-cleanup-deployment-6688745694 b2b8fc85-f274-4307-8e0f-8b5c1527be4e 5816366 1 2020-05-19 00:16:10 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment daa4ef7d-e9be-431b-b48b-301f8440fbfe 0xc00310c167 0xc00310c168}] [] [{kube-controller-manager Update apps/v1 2020-05-19 00:16:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"daa4ef7d-e9be-431b-b48b-301f8440fbfe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00310c2e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 19 00:16:16.801: INFO: Pod "test-cleanup-deployment-6688745694-7l7xl" is available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-7l7xl test-cleanup-deployment-6688745694- deployment-9036 /api/v1/namespaces/deployment-9036/pods/test-cleanup-deployment-6688745694-7l7xl 03d2d639-4b17-4221-ba71-6dcbb44900f2 5816365 0 2020-05-19 00:16:10 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 b2b8fc85-f274-4307-8e0f-8b5c1527be4e 0xc003084d97 0xc003084d98}] [] [{kube-controller-manager Update v1 2020-05-19 00:16:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b2b8fc85-f274-4307-8e0f-8b5c1527be4e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:16:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.124\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rjv8f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rjv8f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rjv8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:16:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-19 00:16:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:16:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:16:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.124,StartTime:2020-05-19 00:16:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:16:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://16571b7518d8120b564f8f067db5f51616a0317969ca2b7f51a4a308bd8e85e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:16:16.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9036" for this suite. • [SLOW TEST:11.215 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":95,"skipped":1772,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:16:16.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 19 00:16:16.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3009' May 19 00:16:17.027: INFO: stderr: "" May 19 00:16:17.027: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 
19 00:16:22.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3009 -o json' May 19 00:16:22.206: INFO: stderr: "" May 19 00:16:22.206: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-19T00:16:17Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-19T00:16:17Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.122\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-19T00:16:20Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3009\",\n \"resourceVersion\": \"5816456\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3009/pods/e2e-test-httpd-pod\",\n \"uid\": \"9960a9d1-2082-4af2-a2d4-e7830d0c91b9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-mxh5k\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-mxh5k\",\n \"secret\": {\n \"defaultMode\": 
420,\n \"secretName\": \"default-token-mxh5k\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T00:16:17Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T00:16:20Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T00:16:20Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-19T00:16:17Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3eb5e280722c475c841b95a5a4a1dea8c4c9072b18da2c0318ae1396bada5c42\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-19T00:16:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.122\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.122\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-19T00:16:17Z\"\n }\n}\n" STEP: replace the image in the pod May 19 00:16:22.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3009' May 19 00:16:22.812: INFO: stderr: "" May 19 00:16:22.812: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 19 00:16:22.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3009' May 19 00:16:35.245: INFO: stderr: "" May 19 00:16:35.245: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:16:35.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3009" for this suite. 
• [SLOW TEST:18.455 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":96,"skipped":1778,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:16:35.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:16:40.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1562" for this suite. 
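The adoption sequence above comes down to two objects whose labels line up; a minimal sketch, assuming an illustrative namespace "demo" (names and image are illustrative, not the test's generated ones):

# an orphan pod carrying the label the controller will select on
kubectl run pod-adoption --image=docker.io/library/httpd:2.4.38-alpine --labels=name=pod-adoption -n demo
# a ReplicationController whose selector matches that label
cat <<EOF | kubectl create -n demo -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# rather than creating a second pod, the controller adopts the orphan,
# which shows up as an ownerReference on the pre-existing pod:
kubectl get pod pod-adoption -n demo -o jsonpath='{.metadata.ownerReferences[0].name}'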
• [SLOW TEST:5.125 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":97,"skipped":1783,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:16:40.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 19 00:16:40.444: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:16:40.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6477" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":98,"skipped":1802,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:16:40.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:16:56.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3641" for this suite. • [SLOW TEST:16.365 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":99,"skipped":1821,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:16:56.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5713 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5713 STEP: creating replication controller externalsvc in namespace services-5713 I0519 00:16:57.177241 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5713, replica count: 2 I0519 00:17:00.227649 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:17:03.227917 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 19 00:17:03.255: INFO: Creating new exec pod 
May 19 00:17:07.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5713 execpod4jsp2 -- /bin/sh -x -c nslookup clusterip-service' May 19 00:17:07.568: INFO: stderr: "I0519 00:17:07.436994 2195 log.go:172] (0xc0009711e0) (0xc00082c500) Create stream\nI0519 00:17:07.437041 2195 log.go:172] (0xc0009711e0) (0xc00082c500) Stream added, broadcasting: 1\nI0519 00:17:07.446823 2195 log.go:172] (0xc0009711e0) Reply frame received for 1\nI0519 00:17:07.446881 2195 log.go:172] (0xc0009711e0) (0xc000821400) Create stream\nI0519 00:17:07.446902 2195 log.go:172] (0xc0009711e0) (0xc000821400) Stream added, broadcasting: 3\nI0519 00:17:07.448411 2195 log.go:172] (0xc0009711e0) Reply frame received for 3\nI0519 00:17:07.448434 2195 log.go:172] (0xc0009711e0) (0xc000618140) Create stream\nI0519 00:17:07.448443 2195 log.go:172] (0xc0009711e0) (0xc000618140) Stream added, broadcasting: 5\nI0519 00:17:07.450178 2195 log.go:172] (0xc0009711e0) Reply frame received for 5\nI0519 00:17:07.529459 2195 log.go:172] (0xc0009711e0) Data frame received for 5\nI0519 00:17:07.529507 2195 log.go:172] (0xc000618140) (5) Data frame handling\nI0519 00:17:07.529542 2195 log.go:172] (0xc000618140) (5) Data frame sent\n+ nslookup clusterip-service\nI0519 00:17:07.559615 2195 log.go:172] (0xc0009711e0) Data frame received for 3\nI0519 00:17:07.559634 2195 log.go:172] (0xc000821400) (3) Data frame handling\nI0519 00:17:07.559648 2195 log.go:172] (0xc000821400) (3) Data frame sent\nI0519 00:17:07.560622 2195 log.go:172] (0xc0009711e0) Data frame received for 3\nI0519 00:17:07.560637 2195 log.go:172] (0xc000821400) (3) Data frame handling\nI0519 00:17:07.560650 2195 log.go:172] (0xc000821400) (3) Data frame sent\nI0519 00:17:07.561321 2195 log.go:172] (0xc0009711e0) Data frame received for 3\nI0519 00:17:07.561336 2195 log.go:172] (0xc000821400) (3) Data frame handling\nI0519 00:17:07.561358 2195 log.go:172] (0xc0009711e0) Data frame received for 5\nI0519 00:17:07.561371 2195 log.go:172] (0xc000618140) (5) Data frame handling\nI0519 00:17:07.563552 2195 log.go:172] (0xc0009711e0) Data frame received for 1\nI0519 00:17:07.563568 2195 log.go:172] (0xc00082c500) (1) Data frame handling\nI0519 00:17:07.563584 2195 log.go:172] (0xc00082c500) (1) Data frame sent\nI0519 00:17:07.563690 2195 log.go:172] (0xc0009711e0) (0xc00082c500) Stream removed, broadcasting: 1\nI0519 00:17:07.563758 2195 log.go:172] (0xc0009711e0) Go away received\nI0519 00:17:07.563972 2195 log.go:172] (0xc0009711e0) (0xc00082c500) Stream removed, broadcasting: 1\nI0519 00:17:07.563988 2195 log.go:172] (0xc0009711e0) (0xc000821400) Stream removed, broadcasting: 3\nI0519 00:17:07.563998 2195 log.go:172] (0xc0009711e0) (0xc000618140) Stream removed, broadcasting: 5\n" May 19 00:17:07.568: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5713.svc.cluster.local\tcanonical name = externalsvc.services-5713.svc.cluster.local.\nName:\texternalsvc.services-5713.svc.cluster.local\nAddress: 10.97.197.191\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5713, will wait for the garbage collector to delete the pods May 19 00:17:07.628: INFO: Deleting ReplicationController externalsvc took: 7.13493ms May 19 00:17:07.928: INFO: Terminating ReplicationController externalsvc pods took: 300.287706ms May 19 00:17:15.359: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:17:15.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5713" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:18.549 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":100,"skipped":1858,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:17:15.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:17:16.134: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:17:18.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444236, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444236, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444236, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444236, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:17:20.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444236, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444236, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does 
not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444236, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444236, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:17:23.392: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:17:23.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4631" for this suite. STEP: Destroying namespace "webhook-4631-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.222 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":101,"skipped":1867,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:17:23.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 19 00:17:28.361: INFO: Successfully updated pod "labelsupdatee83eec33-6a7f-4b96-9903-e7a6fee6b7fa" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:17:32.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1147" for this 
suite. • [SLOW TEST:8.720 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:17:32.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 00:17:36.555: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:17:36.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6746" for this suite. 
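The check above hinges on terminationMessagePolicy; a minimal sketch of the success case it verifies, with illustrative pod and container names:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "exit 0"]   # succeeds without writing /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# the log fallback only kicks in on failure, so a clean exit with an empty
# termination-log file leaves the message empty, as the test asserts
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'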
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1919,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:17:36.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:18:10.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9260" for this suite. 
• [SLOW TEST:33.944 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1935,"failed":0} S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:18:10.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-bd7f0919-87d4-489a-b016-310c4a0d935f [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:18:10.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9524" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":105,"skipped":1936,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:18:10.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:18:11.000: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 19 00:18:11.063: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:11.078: INFO: Number of nodes with available pods: 0 May 19 00:18:11.078: INFO: Node latest-worker is running more than one daemon pod May 19 00:18:12.082: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:12.085: INFO: Number of nodes with available pods: 0 May 19 00:18:12.085: INFO: Node latest-worker is running more than one daemon pod May 19 00:18:13.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:13.088: INFO: Number of nodes with available pods: 0 May 19 00:18:13.088: INFO: Node latest-worker is running more than one daemon pod May 19 00:18:14.081: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:14.085: INFO: Number of nodes with available pods: 0 May 19 00:18:14.085: INFO: Node latest-worker is running more than one daemon pod May 19 00:18:15.082: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:15.086: INFO: Number of nodes with available pods: 1 May 19 00:18:15.086: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:18:16.106: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:16.137: INFO: Number of nodes with available pods: 2 May 19 00:18:16.137: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 19 00:18:16.339: INFO: Wrong image for pod: daemon-set-cbw6v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:16.339: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:16.388: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:17.417: INFO: Wrong image for pod: daemon-set-cbw6v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:17.417: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:17.421: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:18.393: INFO: Wrong image for pod: daemon-set-cbw6v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:18.394: INFO: Wrong image for pod: daemon-set-nk9bs. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:18.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:19.392: INFO: Wrong image for pod: daemon-set-cbw6v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:19.392: INFO: Pod daemon-set-cbw6v is not available May 19 00:18:19.392: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:19.395: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:20.394: INFO: Pod daemon-set-6bz9g is not available May 19 00:18:20.394: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:20.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:21.392: INFO: Pod daemon-set-6bz9g is not available May 19 00:18:21.392: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:21.434: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:22.394: INFO: Pod daemon-set-6bz9g is not available May 19 00:18:22.394: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:22.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:23.392: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:23.396: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:24.393: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:24.394: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:24.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:25.393: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 19 00:18:25.393: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:25.397: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:26.428: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:26.429: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:26.433: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:27.392: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:27.392: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:27.395: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:28.392: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:28.392: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:28.396: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:29.394: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:29.394: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:29.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:30.417: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:30.417: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:30.421: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:31.392: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:31.392: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:31.396: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:32.392: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:32.392: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:32.411: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:33.393: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 19 00:18:33.393: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:33.396: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:34.394: INFO: Wrong image for pod: daemon-set-nk9bs. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 19 00:18:34.394: INFO: Pod daemon-set-nk9bs is not available May 19 00:18:34.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:35.392: INFO: Pod daemon-set-dqxdv is not available May 19 00:18:35.396: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 19 00:18:35.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:35.404: INFO: Number of nodes with available pods: 1 May 19 00:18:35.404: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:18:36.409: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:36.412: INFO: Number of nodes with available pods: 1 May 19 00:18:36.412: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:18:37.427: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:37.429: INFO: Number of nodes with available pods: 1 May 19 00:18:37.429: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:18:38.410: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:38.414: INFO: Number of nodes with available pods: 1 May 19 00:18:38.414: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:18:39.410: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:18:39.414: INFO: Number of nodes with available pods: 2 May 19 00:18:39.414: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8340, will wait for the garbage collector to delete the pods May 19 00:18:39.496: INFO: Deleting DaemonSet.extensions daemon-set took: 18.319135ms May 19 00:18:39.797: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.422759ms May 19 00:18:45.301: INFO: Number of nodes with available pods: 0 May 19 00:18:45.301: INFO: Number of running nodes: 0, number of available pods: 0 May 19 00:18:45.304: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8340/daemonsets","resourceVersion":"5817382"},"items":null} May 19 00:18:45.307: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8340/pods","resourceVersion":"5817382"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:18:45.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8340" for this suite. • [SLOW TEST:34.423 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":106,"skipped":1942,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:18:45.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:18:45.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-802d88c5-e86e-4827-a9a1-a1f67b4cef22" in namespace "projected-6206" to be "Succeeded or Failed" May 19 00:18:45.411: INFO: Pod "downwardapi-volume-802d88c5-e86e-4827-a9a1-a1f67b4cef22": Phase="Pending", Reason="", readiness=false. Elapsed: 3.482231ms May 19 00:18:47.414: INFO: Pod "downwardapi-volume-802d88c5-e86e-4827-a9a1-a1f67b4cef22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007136049s May 19 00:18:49.419: INFO: Pod "downwardapi-volume-802d88c5-e86e-4827-a9a1-a1f67b4cef22": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012031908s STEP: Saw pod success May 19 00:18:49.419: INFO: Pod "downwardapi-volume-802d88c5-e86e-4827-a9a1-a1f67b4cef22" satisfied condition "Succeeded or Failed" May 19 00:18:49.422: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-802d88c5-e86e-4827-a9a1-a1f67b4cef22 container client-container: STEP: delete the pod May 19 00:18:49.522: INFO: Waiting for pod downwardapi-volume-802d88c5-e86e-4827-a9a1-a1f67b4cef22 to disappear May 19 00:18:49.650: INFO: Pod downwardapi-volume-802d88c5-e86e-4827-a9a1-a1f67b4cef22 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:18:49.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6206" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":107,"skipped":1947,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:18:49.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-952ca947-7dca-4ee7-8e7e-7521b4dd9bbc STEP: Creating a pod to test consume configMaps May 19 00:18:49.853: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd3b78cc-6a5f-46f4-ac47-33c196beb80d" in namespace "configmap-7508" to be "Succeeded or Failed" May 19 00:18:49.857: INFO: Pod "pod-configmaps-cd3b78cc-6a5f-46f4-ac47-33c196beb80d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.981486ms May 19 00:18:51.865: INFO: Pod "pod-configmaps-cd3b78cc-6a5f-46f4-ac47-33c196beb80d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011948611s May 19 00:18:53.896: INFO: Pod "pod-configmaps-cd3b78cc-6a5f-46f4-ac47-33c196beb80d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042412553s STEP: Saw pod success May 19 00:18:53.896: INFO: Pod "pod-configmaps-cd3b78cc-6a5f-46f4-ac47-33c196beb80d" satisfied condition "Succeeded or Failed" May 19 00:18:53.900: INFO: Trying to get logs from node latest-worker pod pod-configmaps-cd3b78cc-6a5f-46f4-ac47-33c196beb80d container configmap-volume-test: STEP: delete the pod May 19 00:18:53.961: INFO: Waiting for pod pod-configmaps-cd3b78cc-6a5f-46f4-ac47-33c196beb80d to disappear May 19 00:18:53.974: INFO: Pod pod-configmaps-cd3b78cc-6a5f-46f4-ac47-33c196beb80d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:18:53.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7508" for this suite. 
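The volume consumption just verified can be reproduced with two small objects; a minimal sketch, assuming illustrative names and uid 1000 standing in for the test's non-root user:

kubectl create configmap cm-demo --from-literal=data-1=value-1
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo   # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000      # run the whole pod as non-root, as the test exercises
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo      # each key becomes a file; the default 0644 mode is readable by non-root
EOF
kubectl logs cm-volume-demo   # prints value-1 once the container has run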
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":108,"skipped":1957,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:18:53.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:18:54.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-606" for this suite. STEP: Destroying namespace "nspatchtest-ad25b64d-398a-4008-9e53-c34e818e3fb3-5230" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":109,"skipped":1989,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:18:54.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 19 00:18:54.387: INFO: Waiting up to 1m0s for all nodes to be ready May 19 00:19:54.411: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:19:54.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
May 19 00:19:58.570: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:20:19.087: INFO: pods created so far: [1 1 1] May 19 00:20:19.087: INFO: length of pods created so far: 3 May 19 00:20:35.096: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:20:42.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7947" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:20:42.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5184" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:108.024 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":110,"skipped":2012,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:20:42.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5414 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-5414 May 19 00:20:42.379: INFO: Found 0 stateful pods, waiting for 1 May 19 00:20:52.384: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 19 00:20:52.403: INFO: Deleting all statefulset in ns statefulset-5414 May 19 00:20:52.423: INFO: Scaling statefulset ss to 0 May 19 00:21:12.536: INFO: Waiting for statefulset status.replicas updated to 0 May 19 00:21:12.540: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:21:12.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5414" for this suite. • [SLOW TEST:30.355 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":111,"skipped":2024,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:21:12.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
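For reference while reading the scheduling loop that follows, a DaemonSet comparable to the "daemon-set" this test creates looks roughly like this; labels and image are illustrative:

kubectl create namespace daemonsets-demo
cat <<'EOF' | kubectl apply -n daemonsets-demo -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-demo
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # illustrative; the harness runs its own image
EOF
# One pod per schedulable node; tainted nodes are skipped unless tolerated:
kubectl -n daemonsets-demo rollout status ds/daemon-set
kubectl -n daemonsets-demo get ds daemon-set -o wide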
May 19 00:21:12.727: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:12.738: INFO: Number of nodes with available pods: 0 May 19 00:21:12.738: INFO: Node latest-worker is running more than one daemon pod May 19 00:21:13.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:13.748: INFO: Number of nodes with available pods: 0 May 19 00:21:13.748: INFO: Node latest-worker is running more than one daemon pod May 19 00:21:14.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:14.748: INFO: Number of nodes with available pods: 0 May 19 00:21:14.748: INFO: Node latest-worker is running more than one daemon pod May 19 00:21:15.745: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:15.779: INFO: Number of nodes with available pods: 0 May 19 00:21:15.779: INFO: Node latest-worker is running more than one daemon pod May 19 00:21:16.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:16.749: INFO: Number of nodes with available pods: 0 May 19 00:21:16.749: INFO: Node latest-worker is running more than one daemon pod May 19 00:21:17.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:17.748: INFO: Number of nodes with available pods: 2 May 19 00:21:17.748: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 19 00:21:17.791: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:17.796: INFO: Number of nodes with available pods: 1 May 19 00:21:17.796: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:18.802: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:18.806: INFO: Number of nodes with available pods: 1 May 19 00:21:18.806: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:19.801: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:19.805: INFO: Number of nodes with available pods: 1 May 19 00:21:19.805: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:20.801: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:20.806: INFO: Number of nodes with available pods: 1 May 19 00:21:20.806: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:21.801: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:21.806: INFO: Number of nodes with available pods: 1 May 19 00:21:21.806: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:22.806: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:22.810: INFO: Number of nodes with available pods: 1 May 19 00:21:22.810: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:23.800: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:23.804: INFO: Number of nodes with available pods: 1 May 19 00:21:23.804: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:24.801: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:24.806: INFO: Number of nodes with available pods: 1 May 19 00:21:24.806: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:25.800: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:25.803: INFO: Number of nodes with available pods: 1 May 19 00:21:25.803: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:26.838: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:26.842: INFO: Number of nodes with available pods: 1 May 19 00:21:26.842: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:27.850: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:27.859: INFO: Number of nodes with available pods: 1 May 19 00:21:27.859: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:21:28.802: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:21:28.806: INFO: Number of nodes with available pods: 2 May 19 00:21:28.806: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4579, will wait for the garbage collector to delete the pods May 19 00:21:28.868: INFO: Deleting DaemonSet.extensions daemon-set took: 6.567833ms May 19 00:21:28.969: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.295301ms May 19 00:21:35.400: INFO: Number of nodes with available pods: 0 May 19 00:21:35.400: INFO: Number of running nodes: 0, number of available pods: 0 May 19 00:21:35.403: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4579/daemonsets","resourceVersion":"5818295"},"items":null} May 19 00:21:35.406: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4579/pods","resourceVersion":"5818295"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:21:35.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4579" for this suite. 
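The repeated "can't tolerate node latest-control-plane" lines above are expected, not an error: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the test's DaemonSet declares no matching toleration, so the per-node pod counts cover only the two workers. Likewise, the interleaved "is running more than one daemon pod" lines are the framework's generic retry message while counts have not yet converged, not a literal duplicate-pod report. A sketch of the toleration that would place daemon pods on the control-plane node as well, added under the DaemonSet's spec.template.spec:

# under spec.template.spec:
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule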
• [SLOW TEST:22.835 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":112,"skipped":2033,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:21:35.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5377.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5377.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 00:21:41.608: INFO: DNS probes using dns-test-79185a75-b5d4-42ef-b542-ac65a02dea3e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5377.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5377.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 00:21:49.774: INFO: File wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 00:21:49.777: INFO: File jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 00:21:49.777: INFO: Lookups using dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 failed for: [wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local] May 19 00:21:54.782: INFO: File wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 19 00:21:54.786: INFO: File jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 00:21:54.786: INFO: Lookups using dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 failed for: [wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local] May 19 00:21:59.783: INFO: File wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 00:21:59.787: INFO: File jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 00:21:59.788: INFO: Lookups using dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 failed for: [wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local] May 19 00:22:04.784: INFO: File wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 00:22:04.789: INFO: File jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 00:22:04.789: INFO: Lookups using dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 failed for: [wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local] May 19 00:22:09.782: INFO: File wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' May 19 00:22:09.786: INFO: File jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local from pod dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 19 00:22:09.786: INFO: Lookups using dns-5377/dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 failed for: [wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local] May 19 00:22:14.786: INFO: DNS probes using dns-test-ff61158e-9113-4e9f-ba79-c0fa7f127216 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5377.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5377.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5377.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5377.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 00:22:23.651: INFO: DNS probes using dns-test-5eb7d007-f9cb-49c0-8c57-cf1f648329da succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:22:23.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5377" for this suite. • [SLOW TEST:48.361 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":113,"skipped":2047,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:22:23.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:22:24.331: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-cfc25376-3e90-49c4-a937-3850235c87c0" in namespace "security-context-test-1532" to be "Succeeded or Failed" May 19 00:22:24.358: INFO: Pod "busybox-readonly-false-cfc25376-3e90-49c4-a937-3850235c87c0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.619369ms May 19 00:22:26.396: INFO: Pod "busybox-readonly-false-cfc25376-3e90-49c4-a937-3850235c87c0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.065287548s May 19 00:22:28.400: INFO: Pod "busybox-readonly-false-cfc25376-3e90-49c4-a937-3850235c87c0": Phase="Running", Reason="", readiness=true. Elapsed: 4.069018375s May 19 00:22:30.405: INFO: Pod "busybox-readonly-false-cfc25376-3e90-49c4-a937-3850235c87c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074052877s May 19 00:22:30.405: INFO: Pod "busybox-readonly-false-cfc25376-3e90-49c4-a937-3850235c87c0" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:22:30.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1532" for this suite. • [SLOW TEST:6.629 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":2059,"failed":0} S ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:22:30.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:22:30.523: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:22:34.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2695" for this suite. 
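A minimal out-of-harness equivalent of the writable-rootfs check above; the pod name and image are illustrative, not the harness's generated ones:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # The write succeeds because the root filesystem stays writable:
    command: ["sh", "-c", "echo ok > /probe && cat /probe"]
    securityContext:
      readOnlyRootFilesystem: false
EOF
until [ "$(kubectl get pod busybox-readonly-false-demo -o jsonpath='{.status.phase}')" = "Succeeded" ]; do sleep 2; done
kubectl logs busybox-readonly-false-demo   # expect: ok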
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":115,"skipped":2060,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:22:34.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:22:35.086: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:22:37.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444555, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444555, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444555, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444555, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:22:39.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444555, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444555, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444555, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444555, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:22:42.191: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:22:42.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-124" for this suite. STEP: Destroying namespace "webhook-124-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.803 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":116,"skipped":2086,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:22:42.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 19 00:22:42.464: INFO: namespace kubectl-6010 May 19 00:22:42.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6010' May 19 00:22:46.239: INFO: stderr: "" May 19 00:22:46.239: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 19 00:22:47.244: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:22:47.244: INFO: Found 0 / 1 May 19 00:22:48.263: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:22:48.264: INFO: Found 0 / 1 May 19 00:22:49.243: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:22:49.243: INFO: Found 0 / 1 May 19 00:22:50.242: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:22:50.242: INFO: Found 1 / 1 May 19 00:22:50.242: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 00:22:50.245: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:22:50.245: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 19 00:22:50.245: INFO: wait on agnhost-master startup in kubectl-6010 May 19 00:22:50.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-ld95m agnhost-master --namespace=kubectl-6010' May 19 00:22:50.364: INFO: stderr: "" May 19 00:22:50.364: INFO: stdout: "Paused\n" STEP: exposing RC May 19 00:22:50.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6010' May 19 00:22:50.624: INFO: stderr: "" May 19 00:22:50.624: INFO: stdout: "service/rm2 exposed\n" May 19 00:22:50.667: INFO: Service rm2 in namespace kubectl-6010 found. STEP: exposing service May 19 00:22:52.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6010' May 19 00:22:52.807: INFO: stderr: "" May 19 00:22:52.807: INFO: stdout: "service/rm3 exposed\n" May 19 00:22:52.817: INFO: Service rm3 in namespace kubectl-6010 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:22:54.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6010" for this suite. • [SLOW TEST:12.432 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":117,"skipped":2095,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:22:54.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9770 STEP: creating service affinity-clusterip-transition in namespace services-9770 STEP: creating replication controller affinity-clusterip-transition in namespace services-9770 I0519 00:22:54.986175 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-9770, replica count: 3 I0519 00:22:58.036513 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0519 00:23:01.036742 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 00:23:01.040: INFO: Creating new exec pod May 19 00:23:06.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9770 execpod-affinitymxkff -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 19 00:23:06.346: INFO: stderr: "I0519 00:23:06.233979 2316 log.go:172] (0xc00003a0b0) (0xc00052c3c0) Create stream\nI0519 00:23:06.234045 2316 log.go:172] (0xc00003a0b0) (0xc00052c3c0) Stream added, broadcasting: 1\nI0519 00:23:06.236742 2316 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0519 00:23:06.236791 2316 log.go:172] (0xc00003a0b0) (0xc0004eaf00) Create stream\nI0519 00:23:06.236807 2316 log.go:172] (0xc00003a0b0) (0xc0004eaf00) Stream added, broadcasting: 3\nI0519 00:23:06.238041 2316 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0519 00:23:06.238087 2316 log.go:172] (0xc00003a0b0) (0xc00014f900) Create stream\nI0519 00:23:06.238104 2316 log.go:172] (0xc00003a0b0) (0xc00014f900) Stream added, broadcasting: 5\nI0519 00:23:06.239062 2316 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0519 00:23:06.325930 2316 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0519 00:23:06.325959 2316 log.go:172] (0xc00014f900) (5) Data frame handling\nI0519 00:23:06.325977 2316 log.go:172] (0xc00014f900) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0519 00:23:06.338275 2316 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0519 00:23:06.338316 2316 log.go:172] (0xc00014f900) (5) Data frame handling\nI0519 00:23:06.338370 2316 log.go:172] (0xc00014f900) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0519 00:23:06.338748 2316 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0519 00:23:06.338782 2316 log.go:172] (0xc0004eaf00) (3) Data frame handling\nI0519 00:23:06.338841 2316 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0519 00:23:06.338861 2316 log.go:172] (0xc00014f900) (5) Data frame handling\nI0519 00:23:06.340312 2316 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0519 00:23:06.340339 2316 log.go:172] (0xc00052c3c0) (1) Data frame handling\nI0519 00:23:06.340377 2316 log.go:172] (0xc00052c3c0) (1) Data frame sent\nI0519 00:23:06.340406 2316 log.go:172] (0xc00003a0b0) (0xc00052c3c0) Stream removed, broadcasting: 1\nI0519 00:23:06.340429 2316 log.go:172] (0xc00003a0b0) Go away received\nI0519 00:23:06.340922 2316 log.go:172] (0xc00003a0b0) (0xc00052c3c0) Stream removed, broadcasting: 1\nI0519 00:23:06.340958 2316 log.go:172] (0xc00003a0b0) (0xc0004eaf00) Stream removed, broadcasting: 3\nI0519 00:23:06.340981 2316 log.go:172] (0xc00003a0b0) (0xc00014f900) Stream removed, broadcasting: 5\n" May 19 00:23:06.346: INFO: stdout: "" May 19 00:23:06.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9770 execpod-affinitymxkff -- /bin/sh -x -c nc -zv -t -w 2 10.105.97.18 80' May 19 00:23:06.552: INFO: stderr: "I0519 00:23:06.483041 2336 log.go:172] (0xc000a03600) (0xc000a1a5a0) Create stream\nI0519 00:23:06.483114 2336 log.go:172] (0xc000a03600) (0xc000a1a5a0) Stream added, broadcasting: 1\nI0519 00:23:06.488587 2336 log.go:172] (0xc000a03600) Reply frame received for 1\nI0519 00:23:06.488636 2336 
log.go:172] (0xc000a03600) (0xc000694be0) Create stream\nI0519 00:23:06.488655 2336 log.go:172] (0xc000a03600) (0xc000694be0) Stream added, broadcasting: 3\nI0519 00:23:06.489705 2336 log.go:172] (0xc000a03600) Reply frame received for 3\nI0519 00:23:06.489756 2336 log.go:172] (0xc000a03600) (0xc000520140) Create stream\nI0519 00:23:06.489771 2336 log.go:172] (0xc000a03600) (0xc000520140) Stream added, broadcasting: 5\nI0519 00:23:06.490781 2336 log.go:172] (0xc000a03600) Reply frame received for 5\nI0519 00:23:06.546174 2336 log.go:172] (0xc000a03600) Data frame received for 3\nI0519 00:23:06.546206 2336 log.go:172] (0xc000694be0) (3) Data frame handling\nI0519 00:23:06.546251 2336 log.go:172] (0xc000a03600) Data frame received for 5\nI0519 00:23:06.546326 2336 log.go:172] (0xc000520140) (5) Data frame handling\nI0519 00:23:06.546374 2336 log.go:172] (0xc000520140) (5) Data frame sent\nI0519 00:23:06.546433 2336 log.go:172] (0xc000a03600) Data frame received for 5\nI0519 00:23:06.546456 2336 log.go:172] (0xc000520140) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.97.18 80\nConnection to 10.105.97.18 80 port [tcp/http] succeeded!\nI0519 00:23:06.548379 2336 log.go:172] (0xc000a03600) Data frame received for 1\nI0519 00:23:06.548418 2336 log.go:172] (0xc000a1a5a0) (1) Data frame handling\nI0519 00:23:06.548450 2336 log.go:172] (0xc000a1a5a0) (1) Data frame sent\nI0519 00:23:06.548471 2336 log.go:172] (0xc000a03600) (0xc000a1a5a0) Stream removed, broadcasting: 1\nI0519 00:23:06.548502 2336 log.go:172] (0xc000a03600) Go away received\nI0519 00:23:06.548764 2336 log.go:172] (0xc000a03600) (0xc000a1a5a0) Stream removed, broadcasting: 1\nI0519 00:23:06.548777 2336 log.go:172] (0xc000a03600) (0xc000694be0) Stream removed, broadcasting: 3\nI0519 00:23:06.548783 2336 log.go:172] (0xc000a03600) (0xc000520140) Stream removed, broadcasting: 5\n" May 19 00:23:06.553: INFO: stdout: "" May 19 00:23:06.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9770 execpod-affinitymxkff -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.97.18:80/ ; done' May 19 00:23:06.906: INFO: stderr: "I0519 00:23:06.720611 2358 log.go:172] (0xc000b12210) (0xc0008a0f00) Create stream\nI0519 00:23:06.720658 2358 log.go:172] (0xc000b12210) (0xc0008a0f00) Stream added, broadcasting: 1\nI0519 00:23:06.723622 2358 log.go:172] (0xc000b12210) Reply frame received for 1\nI0519 00:23:06.723683 2358 log.go:172] (0xc000b12210) (0xc00035dea0) Create stream\nI0519 00:23:06.723698 2358 log.go:172] (0xc000b12210) (0xc00035dea0) Stream added, broadcasting: 3\nI0519 00:23:06.724483 2358 log.go:172] (0xc000b12210) Reply frame received for 3\nI0519 00:23:06.724524 2358 log.go:172] (0xc000b12210) (0xc00053a3c0) Create stream\nI0519 00:23:06.724539 2358 log.go:172] (0xc000b12210) (0xc00053a3c0) Stream added, broadcasting: 5\nI0519 00:23:06.725590 2358 log.go:172] (0xc000b12210) Reply frame received for 5\nI0519 00:23:06.796127 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.796186 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.796222 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.796257 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.796289 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.796319 2358 log.go:172] 
(0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.799692 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.799723 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.799751 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.800002 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.800029 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.800040 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.800059 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.800067 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.800076 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.803818 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.803836 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.803852 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.804678 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.804717 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.804733 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.804751 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.804761 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.804776 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\nI0519 00:23:06.804818 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.804833 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.804881 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.812363 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.812394 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.812422 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.813850 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.813879 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.813902 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.813915 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\nI0519 00:23:06.813922 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.813930 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.813955 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\nI0519 00:23:06.813964 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.813973 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.818819 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.818832 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.818840 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.819331 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.819355 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.819364 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.819376 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.819382 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.819389 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.827115 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.827140 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.827156 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.827753 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.827798 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.827865 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.827888 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.827918 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.827948 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.838505 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.838535 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.838562 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.839261 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.839281 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.839289 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.839297 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.839303 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.839310 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.843639 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.843666 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.843682 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.844031 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.844052 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.844076 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.844090 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.844103 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.844112 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.851846 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.851858 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.851873 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.852254 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.852277 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.852286 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.852295 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.852302 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.852318 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.855780 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.855793 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.855803 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.856220 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.856246 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.856256 2358 
log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.856270 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.856281 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.856296 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.860257 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.860274 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.860283 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.860648 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.860666 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.860677 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.860689 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.860695 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.860701 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.868324 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.868353 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.868381 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.869034 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.869053 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.869062 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.869071 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.869103 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.869252 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.874172 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.874184 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.874194 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.874630 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.874640 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.874647 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.874656 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.874661 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.874665 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.880553 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.880568 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.880581 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.881279 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.881305 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.881321 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.881348 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.881364 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.881380 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.886129 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.886145 2358 log.go:172] 
(0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.886157 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.886414 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.886454 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.886479 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.886499 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.886517 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.886536 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.892311 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.892333 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.892360 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.892997 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.893023 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.893034 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.893050 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.893058 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.893069 2358 log.go:172] (0xc00053a3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:06.899355 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.899379 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.899395 2358 log.go:172] (0xc00035dea0) (3) Data frame sent\nI0519 00:23:06.899928 2358 log.go:172] (0xc000b12210) Data frame received for 3\nI0519 00:23:06.899952 2358 log.go:172] (0xc00035dea0) (3) Data frame handling\nI0519 00:23:06.899987 2358 log.go:172] (0xc000b12210) Data frame received for 5\nI0519 00:23:06.900014 2358 log.go:172] (0xc00053a3c0) (5) Data frame handling\nI0519 00:23:06.901821 2358 log.go:172] (0xc000b12210) Data frame received for 1\nI0519 00:23:06.901841 2358 log.go:172] (0xc0008a0f00) (1) Data frame handling\nI0519 00:23:06.901852 2358 log.go:172] (0xc0008a0f00) (1) Data frame sent\nI0519 00:23:06.901876 2358 log.go:172] (0xc000b12210) (0xc0008a0f00) Stream removed, broadcasting: 1\nI0519 00:23:06.901888 2358 log.go:172] (0xc000b12210) Go away received\nI0519 00:23:06.902250 2358 log.go:172] (0xc000b12210) (0xc0008a0f00) Stream removed, broadcasting: 1\nI0519 00:23:06.902264 2358 log.go:172] (0xc000b12210) (0xc00035dea0) Stream removed, broadcasting: 3\nI0519 00:23:06.902272 2358 log.go:172] (0xc000b12210) (0xc00053a3c0) Stream removed, broadcasting: 5\n" May 19 00:23:06.907: INFO: stdout: "\naffinity-clusterip-transition-p5qgd\naffinity-clusterip-transition-p5qgd\naffinity-clusterip-transition-bvf7k\naffinity-clusterip-transition-p5qgd\naffinity-clusterip-transition-p5qgd\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-p5qgd\naffinity-clusterip-transition-bvf7k\naffinity-clusterip-transition-bvf7k\naffinity-clusterip-transition-p5qgd\naffinity-clusterip-transition-p5qgd\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-bvf7k" May 19 00:23:06.907: INFO: Received response from host: May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-p5qgd May 19 00:23:06.907: INFO: Received response from host: 
affinity-clusterip-transition-p5qgd May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-bvf7k May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-p5qgd May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-p5qgd May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-p5qgd May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-bvf7k May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-bvf7k May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-p5qgd May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-p5qgd May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:06.907: INFO: Received response from host: affinity-clusterip-transition-bvf7k May 19 00:23:06.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9770 execpod-affinitymxkff -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.97.18:80/ ; done' May 19 00:23:07.278: INFO: stderr: "I0519 00:23:07.107370 2376 log.go:172] (0xc00097b130) (0xc00098e1e0) Create stream\nI0519 00:23:07.107447 2376 log.go:172] (0xc00097b130) (0xc00098e1e0) Stream added, broadcasting: 1\nI0519 00:23:07.111882 2376 log.go:172] (0xc00097b130) Reply frame received for 1\nI0519 00:23:07.111920 2376 log.go:172] (0xc00097b130) (0xc000472280) Create stream\nI0519 00:23:07.111931 2376 log.go:172] (0xc00097b130) (0xc000472280) Stream added, broadcasting: 3\nI0519 00:23:07.112864 2376 log.go:172] (0xc00097b130) Reply frame received for 3\nI0519 00:23:07.112901 2376 log.go:172] (0xc00097b130) (0xc0003badc0) Create stream\nI0519 00:23:07.112933 2376 log.go:172] (0xc00097b130) (0xc0003badc0) Stream added, broadcasting: 5\nI0519 00:23:07.114199 2376 log.go:172] (0xc00097b130) Reply frame received for 5\nI0519 00:23:07.185976 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.186028 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.186066 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.186083 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.186107 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.186124 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.192857 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.192885 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.192913 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.193977 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.194003 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.194014 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.194030 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 
00:23:07.194037 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.194044 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.198493 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.198512 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.198526 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.199042 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.199068 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.199090 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.199133 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.199157 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.199170 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.203322 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.203344 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.203370 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.204019 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.204046 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.204077 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ I0519 00:23:07.204096 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.204142 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.204158 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\nechoI0519 00:23:07.204172 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.204187 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.204204 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\nI0519 00:23:07.204218 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.204233 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\n\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.204254 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.204305 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.204321 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.204344 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\nI0519 00:23:07.207326 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.207353 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.207381 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.207684 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.207698 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.207707 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.207721 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.207729 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.207738 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.213557 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.213587 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.213603 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.213616 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 
00:23:07.213624 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.213645 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.213653 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.213661 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.213676 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.216658 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.216676 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.216685 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.217661 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.217672 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.217680 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.217693 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.217724 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.217739 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.222403 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.222422 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.222435 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.223160 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.223229 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.223241 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.223251 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.223257 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.223263 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.227177 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.227192 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.227210 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.227601 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.227622 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.227631 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.227640 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.227645 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.227650 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.233046 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.233058 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.233075 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.233666 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.233694 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.233708 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.233815 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.233828 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.233838 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.238005 
2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.238044 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.238075 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.238352 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.238370 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.238386 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\nI0519 00:23:07.238398 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.238408 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.238422 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\nI0519 00:23:07.238433 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.238461 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.238498 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.242482 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.242508 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.242538 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.243037 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.243069 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.243094 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.243641 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.243664 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.243690 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.248107 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.248123 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.248134 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.248429 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.248449 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.248460 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.248476 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.248486 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.248497 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.252797 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.252857 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.252870 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.253441 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.253484 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.253503 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.253520 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.253537 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.253562 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.257546 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.257561 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.257575 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.257868 
2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.257881 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.257896 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.257934 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.257954 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.257967 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.264355 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.264375 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.264392 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.264771 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.264792 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.264816 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.264830 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.264850 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.264864 2376 log.go:172] (0xc0003badc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.97.18:80/\nI0519 00:23:07.269938 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.269953 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.269969 2376 log.go:172] (0xc000472280) (3) Data frame sent\nI0519 00:23:07.271218 2376 log.go:172] (0xc00097b130) Data frame received for 3\nI0519 00:23:07.271240 2376 log.go:172] (0xc000472280) (3) Data frame handling\nI0519 00:23:07.271363 2376 log.go:172] (0xc00097b130) Data frame received for 5\nI0519 00:23:07.271387 2376 log.go:172] (0xc0003badc0) (5) Data frame handling\nI0519 00:23:07.273055 2376 log.go:172] (0xc00097b130) Data frame received for 1\nI0519 00:23:07.273090 2376 log.go:172] (0xc00098e1e0) (1) Data frame handling\nI0519 00:23:07.273333 2376 log.go:172] (0xc00098e1e0) (1) Data frame sent\nI0519 00:23:07.273374 2376 log.go:172] (0xc00097b130) (0xc00098e1e0) Stream removed, broadcasting: 1\nI0519 00:23:07.273407 2376 log.go:172] (0xc00097b130) Go away received\nI0519 00:23:07.273854 2376 log.go:172] (0xc00097b130) (0xc00098e1e0) Stream removed, broadcasting: 1\nI0519 00:23:07.273872 2376 log.go:172] (0xc00097b130) (0xc000472280) Stream removed, broadcasting: 3\nI0519 00:23:07.273885 2376 log.go:172] (0xc00097b130) (0xc0003badc0) Stream removed, broadcasting: 5\n" May 19 00:23:07.279: INFO: stdout: "\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9\naffinity-clusterip-transition-xkck9" May 19 00:23:07.279: INFO: Received response from host: May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received 
response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Received response from host: affinity-clusterip-transition-xkck9 May 19 00:23:07.279: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9770, will wait for the garbage collector to delete the pods May 19 00:23:07.377: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.563785ms May 19 00:23:08.077: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 700.181005ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:23:15.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9770" for this suite. 
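[Note] The affinity transition exercised above amounts to flipping the Service's spec.sessionAffinity field between ClientIP and None and re-probing the ClusterIP. The e2e test does this through the Go client; a rough manual equivalent with plain kubectl, using the names and IP from this run, would be:

  # enable ClientIP affinity: repeated requests should now hit a single backend
  kubectl patch service affinity-clusterip-transition -n services-9770 \
    -p '{"spec":{"sessionAffinity":"ClientIP"}}'
  # probe the ClusterIP repeatedly (run from a pod inside the cluster, as the test's exec pod does)
  for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.97.18:80/; done
  # switch back to None: responses should again spread across the backends
  kubectl patch service affinity-clusterip-transition -n services-9770 \
    -p '{"spec":{"sessionAffinity":"None"}}'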
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.467 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":118,"skipped":2106,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:23:15.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:23:15.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f59cc66-2943-4d46-aabd-dedf29c13935" in namespace "projected-9002" to be "Succeeded or Failed" May 19 00:23:15.391: INFO: Pod "downwardapi-volume-2f59cc66-2943-4d46-aabd-dedf29c13935": Phase="Pending", Reason="", readiness=false. Elapsed: 12.155819ms May 19 00:23:17.469: INFO: Pod "downwardapi-volume-2f59cc66-2943-4d46-aabd-dedf29c13935": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090325347s May 19 00:23:19.474: INFO: Pod "downwardapi-volume-2f59cc66-2943-4d46-aabd-dedf29c13935": Phase="Running", Reason="", readiness=true. Elapsed: 4.094965485s May 19 00:23:21.478: INFO: Pod "downwardapi-volume-2f59cc66-2943-4d46-aabd-dedf29c13935": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099358283s STEP: Saw pod success May 19 00:23:21.478: INFO: Pod "downwardapi-volume-2f59cc66-2943-4d46-aabd-dedf29c13935" satisfied condition "Succeeded or Failed" May 19 00:23:21.482: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2f59cc66-2943-4d46-aabd-dedf29c13935 container client-container: STEP: delete the pod May 19 00:23:21.519: INFO: Waiting for pod downwardapi-volume-2f59cc66-2943-4d46-aabd-dedf29c13935 to disappear May 19 00:23:21.531: INFO: Pod downwardapi-volume-2f59cc66-2943-4d46-aabd-dedf29c13935 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:23:21.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9002" for this suite. 
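[Note] The downward API check below relies on a projected volume exposing limits.memory; when the container sets no memory limit, the exposed value falls back to the node's allocatable memory. A minimal sketch of such a pod (pod name and image are illustrative; the e2e framework builds an equivalent spec in Go):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory   # defaults to node allocatable when no limit is set
  EOF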
• [SLOW TEST:6.236 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":119,"skipped":2111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:23:21.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:23:21.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7805' May 19 00:23:21.912: INFO: stderr: "" May 19 00:23:21.912: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 19 00:23:21.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7805' May 19 00:23:22.212: INFO: stderr: "" May 19 00:23:22.212: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 19 00:23:23.270: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:23:23.270: INFO: Found 0 / 1 May 19 00:23:24.216: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:23:24.216: INFO: Found 0 / 1 May 19 00:23:25.217: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:23:25.217: INFO: Found 0 / 1 May 19 00:23:26.216: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:23:26.216: INFO: Found 1 / 1 May 19 00:23:26.216: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 19 00:23:26.218: INFO: Selector matched 1 pods for map[app:agnhost] May 19 00:23:26.218: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 19 00:23:26.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-76jpc --namespace=kubectl-7805' May 19 00:23:26.340: INFO: stderr: "" May 19 00:23:26.340: INFO: stdout: "Name: agnhost-master-76jpc\nNamespace: kubectl-7805\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Tue, 19 May 2020 00:23:21 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.146\nIPs:\n IP: 10.244.1.146\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://01d16162aff9639796a6e1b6beba61c66070e83da3af5f0aae5b62eefee29b95\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 19 May 2020 00:23:24 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ghsp5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ghsp5:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ghsp5\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-7805/agnhost-master-76jpc to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 2s kubelet, latest-worker Started container agnhost-master\n" May 19 00:23:26.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7805' May 19 00:23:26.520: INFO: stderr: "" May 19 00:23:26.520: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7805\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-76jpc\n" May 19 00:23:26.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7805' May 19 00:23:26.646: INFO: stderr: "" May 19 00:23:26.646: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-7805\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.106.147.2\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.146:6379\nSession Affinity: None\nEvents: \n" May 19 00:23:26.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe 
node latest-control-plane' May 19 00:23:26.782: INFO: stderr: "" May 19 00:23:26.782: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Tue, 19 May 2020 00:23:22 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 19 May 2020 00:18:35 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 19 May 2020 00:18:35 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 19 May 2020 00:18:35 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 19 May 2020 00:18:35 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 19d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 19d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 19d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 19 00:23:26.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-7805' May 19 00:23:26.899: INFO: stderr: "" May 19 00:23:26.899: INFO: stdout: "Name: kubectl-7805\nLabels: e2e-framework=kubectl\n e2e-run=f8ad90c3-e60f-41ce-b39c-8d0cd27f60aa\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:23:26.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7805" for this suite. • [SLOW TEST:5.368 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":120,"skipped":2161,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:23:26.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:23:27.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8858" for this suite. 
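[Note] The kubelet case above only needs a pod whose command always fails, then verifies the pod can still be deleted. A rough manual equivalent (pod name is illustrative, not from this run):

  kubectl run bin-false --image=busybox --restart=Never -- /bin/false
  kubectl delete pod bin-false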
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":121,"skipped":2165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:23:27.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:23:27.868: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:23:29.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444607, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444607, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444608, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444607, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:23:31.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444607, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444607, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444608, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725444607, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:23:34.937: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:23:47.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1051" for this suite. STEP: Destroying namespace "webhook-1051-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.186 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":122,"skipped":2196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:23:47.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 19 00:23:47.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 19 00:23:47.551: INFO: stderr: "" May 19 00:23:47.552: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:23:47.552: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "kubectl-5703" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":123,"skipped":2243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:23:47.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating a pod May 19 00:23:47.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-8220 -- logs-generator --log-lines-total 100 --run-duration 20s' May 19 00:23:47.827: INFO: stderr: "" May 19 00:23:47.827: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 19 00:23:47.827: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 19 00:23:47.828: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8220" to be "running and ready, or succeeded" May 19 00:23:48.014: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 186.229888ms May 19 00:23:50.018: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190826878s May 19 00:23:52.023: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.195270454s May 19 00:23:52.023: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 19 00:23:52.023: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 19 00:23:52.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8220' May 19 00:23:52.165: INFO: stderr: "" May 19 00:23:52.165: INFO: stdout: "I0519 00:23:50.720092 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/k2b 208\nI0519 00:23:50.920195 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/x6gf 326\nI0519 00:23:51.120412 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/2w9c 292\nI0519 00:23:51.320274 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/ff5 468\nI0519 00:23:51.520232 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/jwz 543\nI0519 00:23:51.720242 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/zllz 472\nI0519 00:23:51.920325 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/n4j 548\nI0519 00:23:52.120245 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/2vw 271\n" STEP: limiting log lines May 19 00:23:52.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8220 --tail=1' May 19 00:23:52.343: INFO: stderr: "" May 19 00:23:52.343: INFO: stdout: "I0519 00:23:52.320247 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/57f 488\n" May 19 00:23:52.343: INFO: got output "I0519 00:23:52.320247 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/57f 488\n" STEP: limiting log bytes May 19 00:23:52.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8220 --limit-bytes=1' May 19 00:23:52.449: INFO: stderr: "" May 19 00:23:52.449: INFO: stdout: "I" May 19 00:23:52.449: INFO: got output "I" STEP: exposing timestamps May 19 00:23:52.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8220 --tail=1 --timestamps' May 19 00:23:52.590: INFO: stderr: "" May 19 00:23:52.590: INFO: stdout: "2020-05-19T00:23:52.520445935Z I0519 00:23:52.520276 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/mhsn 289\n" May 19 00:23:52.590: INFO: got output "2020-05-19T00:23:52.520445935Z I0519 00:23:52.520276 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/mhsn 289\n" STEP: restricting to a time range May 19 00:23:55.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8220 --since=1s' May 19 00:23:55.209: INFO: stderr: "" May 19 00:23:55.210: INFO: stdout: "I0519 00:23:54.320250 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/wpgk 550\nI0519 00:23:54.520301 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/69jw 468\nI0519 00:23:54.720268 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/mx94 540\nI0519 00:23:54.920254 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/v5s 339\nI0519 00:23:55.120231 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/4wz9 210\n" May 19 00:23:55.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8220 
--since=24h' May 19 00:23:55.323: INFO: stderr: "" May 19 00:23:55.323: INFO: stdout: "I0519 00:23:50.720092 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/k2b 208\nI0519 00:23:50.920195 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/x6gf 326\nI0519 00:23:51.120412 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/2w9c 292\nI0519 00:23:51.320274 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/ff5 468\nI0519 00:23:51.520232 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/jwz 543\nI0519 00:23:51.720242 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/zllz 472\nI0519 00:23:51.920325 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/n4j 548\nI0519 00:23:52.120245 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/2vw 271\nI0519 00:23:52.320247 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/57f 488\nI0519 00:23:52.520276 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/mhsn 289\nI0519 00:23:52.720290 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/njh 586\nI0519 00:23:52.920232 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/sshl 460\nI0519 00:23:53.120224 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/6qs 445\nI0519 00:23:53.320246 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/h8g 529\nI0519 00:23:53.520219 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/b7xl 299\nI0519 00:23:53.720256 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/m2x2 286\nI0519 00:23:53.920273 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/4qn 522\nI0519 00:23:54.120277 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/4jts 448\nI0519 00:23:54.320250 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/wpgk 550\nI0519 00:23:54.520301 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/69jw 468\nI0519 00:23:54.720268 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/mx94 540\nI0519 00:23:54.920254 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/v5s 339\nI0519 00:23:55.120231 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/4wz9 210\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 19 00:23:55.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8220' May 19 00:24:04.849: INFO: stderr: "" May 19 00:24:04.850: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:24:04.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8220" for this suite. 
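[Note] The filtering steps exercised above map directly onto these kubectl logs flags (pod and namespace as in this run):

  kubectl logs logs-generator -n kubectl-8220 --tail=1               # only the most recent line
  kubectl logs logs-generator -n kubectl-8220 --limit-bytes=1        # truncate the output after one byte
  kubectl logs logs-generator -n kubectl-8220 --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
  kubectl logs logs-generator -n kubectl-8220 --since=1s             # only lines from the last second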
• [SLOW TEST:17.310 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":124,"skipped":2272,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:24:04.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-91d547f0-b68c-4e7e-b79d-1f4068297840 STEP: Creating a pod to test consume configMaps May 19 00:24:04.919: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d5c78a5-319a-439d-8ddc-22c03a5e3a14" in namespace "projected-975" to be "Succeeded or Failed" May 19 00:24:04.971: INFO: Pod "pod-projected-configmaps-0d5c78a5-319a-439d-8ddc-22c03a5e3a14": Phase="Pending", Reason="", readiness=false. Elapsed: 51.512436ms May 19 00:24:07.187: INFO: Pod "pod-projected-configmaps-0d5c78a5-319a-439d-8ddc-22c03a5e3a14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267611625s May 19 00:24:09.190: INFO: Pod "pod-projected-configmaps-0d5c78a5-319a-439d-8ddc-22c03a5e3a14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.271117751s STEP: Saw pod success May 19 00:24:09.190: INFO: Pod "pod-projected-configmaps-0d5c78a5-319a-439d-8ddc-22c03a5e3a14" satisfied condition "Succeeded or Failed" May 19 00:24:09.193: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-0d5c78a5-319a-439d-8ddc-22c03a5e3a14 container projected-configmap-volume-test: STEP: delete the pod May 19 00:24:09.260: INFO: Waiting for pod pod-projected-configmaps-0d5c78a5-319a-439d-8ddc-22c03a5e3a14 to disappear May 19 00:24:09.269: INFO: Pod pod-projected-configmaps-0d5c78a5-319a-439d-8ddc-22c03a5e3a14 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:24:09.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-975" for this suite. 
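[Note] The projected-ConfigMap case below mounts a ConfigMap through a projected volume with a key-to-path mapping and reads it as a non-root user. A minimal sketch (all names, the UID, and the image are illustrative; the e2e framework builds an equivalent spec in Go):

  kubectl create configmap my-config --from-literal=some-key=some-value
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-nonroot
  spec:
    securityContext:
      runAsUser: 1000            # run the pod's containers as a non-root UID
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /etc/config/mapped-path"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/config
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: my-config
            items:
            - key: some-key      # the ConfigMap key is remapped to a different file name
              path: mapped-path
  EOF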
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":125,"skipped":2283,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:24:09.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 19 00:24:09.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4596' May 19 00:24:09.671: INFO: stderr: "" May 19 00:24:09.671: INFO: stdout: "pod/pause created\n" May 19 00:24:09.671: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 19 00:24:09.671: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4596" to be "running and ready" May 19 00:24:09.731: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 59.893402ms May 19 00:24:11.851: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179575921s May 19 00:24:13.854: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.183224478s May 19 00:24:13.855: INFO: Pod "pause" satisfied condition "running and ready" May 19 00:24:13.855: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 19 00:24:13.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4596' May 19 00:24:13.965: INFO: stderr: "" May 19 00:24:13.965: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 19 00:24:13.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4596' May 19 00:24:14.078: INFO: stderr: "" May 19 00:24:14.078: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 19 00:24:14.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4596' May 19 00:24:14.195: INFO: stderr: "" May 19 00:24:14.195: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 19 00:24:14.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4596' May 19 00:24:14.323: INFO: stderr: "" May 19 00:24:14.323: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 19 00:24:14.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4596' May 19 00:24:14.533: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 00:24:14.533: INFO: stdout: "pod \"pause\" force deleted\n" May 19 00:24:14.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4596' May 19 00:24:14.885: INFO: stderr: "No resources found in kubectl-4596 namespace.\n" May 19 00:24:14.885: INFO: stdout: "" May 19 00:24:14.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4596 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 19 00:24:15.024: INFO: stderr: "" May 19 00:24:15.024: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:24:15.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4596" for this suite. 
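------------------------------
Stripped of the --server/--kubeconfig/--namespace plumbing, the label round-trip the test just ran reduces to four kubectl invocations; the trailing dash in "testing-label-" is kubectl's syntax for removing a label:

kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label     # TESTING-LABEL column shows testing-label-value
kubectl label pods pause testing-label-    # trailing "-" deletes the label
kubectl get pod pause -L testing-label     # TESTING-LABEL column is now empty
------------------------------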
• [SLOW TEST:5.768 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":126,"skipped":2294,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:24:15.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 19 00:24:15.101: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:24:23.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5896" for this suite. 
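------------------------------
Init containers run sequentially to completion before any app container starts; with restartPolicy: Always the app containers are restarted on exit, but init containers that already succeeded are not rerun. A minimal sketch of the kind of pod this test creates (pod and container names are illustrative, not the test's own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init done"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl get pod init-demo -w    # STATUS walks Init:0/2 -> Init:1/2 -> PodInitializing -> Running
------------------------------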
• [SLOW TEST:8.474 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":127,"skipped":2327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:24:23.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:24:23.614: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9f4253b-3284-498d-aee3-58f656742d24" in namespace "downward-api-7632" to be "Succeeded or Failed" May 19 00:24:23.618: INFO: Pod "downwardapi-volume-a9f4253b-3284-498d-aee3-58f656742d24": Phase="Pending", Reason="", readiness=false. Elapsed: 3.385458ms May 19 00:24:25.622: INFO: Pod "downwardapi-volume-a9f4253b-3284-498d-aee3-58f656742d24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007431152s May 19 00:24:27.626: INFO: Pod "downwardapi-volume-a9f4253b-3284-498d-aee3-58f656742d24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011712981s STEP: Saw pod success May 19 00:24:27.626: INFO: Pod "downwardapi-volume-a9f4253b-3284-498d-aee3-58f656742d24" satisfied condition "Succeeded or Failed" May 19 00:24:27.629: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a9f4253b-3284-498d-aee3-58f656742d24 container client-container: STEP: delete the pod May 19 00:24:27.679: INFO: Waiting for pod downwardapi-volume-a9f4253b-3284-498d-aee3-58f656742d24 to disappear May 19 00:24:27.689: INFO: Pod downwardapi-volume-a9f4253b-3284-498d-aee3-58f656742d24 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:24:27.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7632" for this suite. 
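------------------------------
The downward API volume in this test surfaces the container's own CPU limit as a file. A sketch under an assumed, illustrative 500m limit; resourceFieldRef and divisor are standard downwardAPI volume fields, and divisor: 1m makes the file report the limit in millicores:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m            # scale the quantity: 500m / 1m = 500
EOF
kubectl logs dapi-volume-demo    # prints 500
------------------------------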
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":128,"skipped":2350,"failed":0} SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:24:27.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-79a4d284-df18-42fa-8097-cde6c3895ab0 STEP: Creating secret with name s-test-opt-upd-fb271189-1da7-4ebd-85ab-61f8bae36130 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-79a4d284-df18-42fa-8097-cde6c3895ab0 STEP: Updating secret s-test-opt-upd-fb271189-1da7-4ebd-85ab-61f8bae36130 STEP: Creating secret with name s-test-opt-create-1e8f5808-2546-43f5-aacd-935ac9ae0cca STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:24:36.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2118" for this suite. • [SLOW TEST:8.311 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":129,"skipped":2352,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:24:36.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 19 00:24:36.158: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:36.163: INFO: Number of nodes with available pods: 0 May 19 00:24:36.163: INFO: Node latest-worker is running more than one daemon pod May 19 00:24:37.176: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:37.180: INFO: Number of nodes with available pods: 0 May 19 00:24:37.180: INFO: Node latest-worker is running more than one daemon pod May 19 00:24:38.168: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:38.171: INFO: Number of nodes with available pods: 0 May 19 00:24:38.171: INFO: Node latest-worker is running more than one daemon pod May 19 00:24:39.168: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:39.171: INFO: Number of nodes with available pods: 0 May 19 00:24:39.171: INFO: Node latest-worker is running more than one daemon pod May 19 00:24:40.168: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:40.172: INFO: Number of nodes with available pods: 0 May 19 00:24:40.172: INFO: Node latest-worker is running more than one daemon pod May 19 00:24:41.171: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:41.174: INFO: Number of nodes with available pods: 2 May 19 00:24:41.174: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 19 00:24:41.254: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:41.261: INFO: Number of nodes with available pods: 1 May 19 00:24:41.261: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:24:42.267: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:42.307: INFO: Number of nodes with available pods: 1 May 19 00:24:42.307: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:24:43.267: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:43.271: INFO: Number of nodes with available pods: 1 May 19 00:24:43.271: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:24:44.296: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:44.300: INFO: Number of nodes with available pods: 1 May 19 00:24:44.300: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:24:45.266: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:45.269: INFO: Number of nodes with available pods: 1 May 19 00:24:45.269: INFO: Node latest-worker2 is running more than one daemon pod May 19 00:24:46.266: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:24:46.276: INFO: Number of nodes with available pods: 2 May 19 00:24:46.276: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2956, will wait for the garbage collector to delete the pods May 19 00:24:46.344: INFO: Deleting DaemonSet.extensions daemon-set took: 11.83382ms May 19 00:24:46.644: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.221194ms May 19 00:24:50.049: INFO: Number of nodes with available pods: 0 May 19 00:24:50.049: INFO: Number of running nodes: 0, number of available pods: 0 May 19 00:24:50.052: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2956/daemonsets","resourceVersion":"5819698"},"items":null} May 19 00:24:50.054: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2956/pods","resourceVersion":"5819698"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:24:50.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2956" for this suite. 
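------------------------------
The test flips a daemon pod's status.phase to Failed through the API and verifies the DaemonSet controller revives it (note also why latest-control-plane is skipped above: the pod template carries no toleration for the node-role.kubernetes.io/master NoSchedule taint). Forcing a Failed phase is awkward from the CLI, but deleting a daemon pod demonstrates the same reconciliation; the manifest below is an illustrative approximation, not the test's own:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
EOF
kubectl get pods -l app=daemon-set -o wide          # one pod per schedulable node
kubectl delete pod -l app=daemon-set --wait=false   # remove the daemon pods...
kubectl get pods -l app=daemon-set -w               # ...and watch the controller recreate them
------------------------------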
• [SLOW TEST:14.058 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":130,"skipped":2357,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:24:50.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8811 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8811 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8811 May 19 00:24:50.261: INFO: Found 0 stateful pods, waiting for 1 May 19 00:25:00.266: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 19 00:25:00.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8811 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 00:25:00.696: INFO: stderr: "I0519 00:25:00.540742 2876 log.go:172] (0xc00003a2c0) (0xc0005f8fa0) Create stream\nI0519 00:25:00.540794 2876 log.go:172] (0xc00003a2c0) (0xc0005f8fa0) Stream added, broadcasting: 1\nI0519 00:25:00.543683 2876 log.go:172] (0xc00003a2c0) Reply frame received for 1\nI0519 00:25:00.543741 2876 log.go:172] (0xc00003a2c0) (0xc000488d20) Create stream\nI0519 00:25:00.543758 2876 log.go:172] (0xc00003a2c0) (0xc000488d20) Stream added, broadcasting: 3\nI0519 00:25:00.544754 2876 log.go:172] (0xc00003a2c0) Reply frame received for 3\nI0519 00:25:00.544798 2876 log.go:172] (0xc00003a2c0) (0xc00024fe00) Create stream\nI0519 00:25:00.544817 2876 log.go:172] (0xc00003a2c0) (0xc00024fe00) Stream added, broadcasting: 5\nI0519 00:25:00.546045 2876 log.go:172] (0xc00003a2c0) Reply frame received for 5\nI0519 00:25:00.651409 2876 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0519 00:25:00.651451 2876 log.go:172] (0xc00024fe00) (5) Data frame handling\nI0519 00:25:00.651483 2876 log.go:172] (0xc00024fe00) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0519 00:25:00.685656 2876 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0519 00:25:00.685692 2876 log.go:172] (0xc000488d20) (3) Data frame handling\nI0519 00:25:00.685719 2876 log.go:172] (0xc000488d20) (3) Data frame sent\nI0519 00:25:00.686299 2876 log.go:172] (0xc00003a2c0) Data frame received for 5\nI0519 00:25:00.686331 2876 log.go:172] (0xc00024fe00) (5) Data frame handling\nI0519 00:25:00.686374 2876 log.go:172] (0xc00003a2c0) Data frame received for 3\nI0519 00:25:00.686392 2876 log.go:172] (0xc000488d20) (3) Data frame handling\nI0519 00:25:00.688466 2876 log.go:172] (0xc00003a2c0) Data frame received for 1\nI0519 00:25:00.688504 2876 log.go:172] (0xc0005f8fa0) (1) Data frame handling\nI0519 00:25:00.688537 2876 log.go:172] (0xc0005f8fa0) (1) Data frame sent\nI0519 00:25:00.688562 2876 log.go:172] (0xc00003a2c0) (0xc0005f8fa0) Stream removed, broadcasting: 1\nI0519 00:25:00.688581 2876 log.go:172] (0xc00003a2c0) Go away received\nI0519 00:25:00.689091 2876 log.go:172] (0xc00003a2c0) (0xc0005f8fa0) Stream removed, broadcasting: 1\nI0519 00:25:00.689321 2876 log.go:172] (0xc00003a2c0) (0xc000488d20) Stream removed, broadcasting: 3\nI0519 00:25:00.689417 2876 log.go:172] (0xc00003a2c0) (0xc00024fe00) Stream removed, broadcasting: 5\n" May 19 00:25:00.696: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 00:25:00.696: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 00:25:00.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 19 00:25:10.703: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 00:25:10.703: INFO: Waiting for statefulset status.replicas updated to 0 May 19 00:25:10.750: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999962s May 19 00:25:11.764: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.959113436s May 19 00:25:12.768: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.945851464s May 19 00:25:13.773: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.94186905s May 19 00:25:14.778: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.936566302s May 19 00:25:15.782: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.931915742s May 19 00:25:16.786: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.927244404s May 19 00:25:17.790: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.923027373s May 19 00:25:18.796: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.91918263s May 19 00:25:19.801: INFO: Verifying statefulset ss doesn't scale past 1 for another 913.776393ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8811 May 19 00:25:20.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8811 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 00:25:21.044: INFO: stderr: "I0519 00:25:20.942357 2898 log.go:172] (0xc0009c7760) (0xc000b0a500) Create stream\nI0519 00:25:20.942442 2898 log.go:172] (0xc0009c7760) (0xc000b0a500) Stream added, broadcasting: 1\nI0519 00:25:20.947950 2898 log.go:172] (0xc0009c7760) Reply frame received for 1\nI0519 00:25:20.947993 2898 
log.go:172] (0xc0009c7760) (0xc00063e5a0) Create stream\nI0519 00:25:20.948003 2898 log.go:172] (0xc0009c7760) (0xc00063e5a0) Stream added, broadcasting: 3\nI0519 00:25:20.948957 2898 log.go:172] (0xc0009c7760) Reply frame received for 3\nI0519 00:25:20.949015 2898 log.go:172] (0xc0009c7760) (0xc00050a280) Create stream\nI0519 00:25:20.949030 2898 log.go:172] (0xc0009c7760) (0xc00050a280) Stream added, broadcasting: 5\nI0519 00:25:20.950131 2898 log.go:172] (0xc0009c7760) Reply frame received for 5\nI0519 00:25:21.037001 2898 log.go:172] (0xc0009c7760) Data frame received for 3\nI0519 00:25:21.037057 2898 log.go:172] (0xc00063e5a0) (3) Data frame handling\nI0519 00:25:21.037072 2898 log.go:172] (0xc00063e5a0) (3) Data frame sent\nI0519 00:25:21.037083 2898 log.go:172] (0xc0009c7760) Data frame received for 3\nI0519 00:25:21.037092 2898 log.go:172] (0xc00063e5a0) (3) Data frame handling\nI0519 00:25:21.037256 2898 log.go:172] (0xc0009c7760) Data frame received for 5\nI0519 00:25:21.037271 2898 log.go:172] (0xc00050a280) (5) Data frame handling\nI0519 00:25:21.037281 2898 log.go:172] (0xc00050a280) (5) Data frame sent\nI0519 00:25:21.037287 2898 log.go:172] (0xc0009c7760) Data frame received for 5\nI0519 00:25:21.037292 2898 log.go:172] (0xc00050a280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 00:25:21.038861 2898 log.go:172] (0xc0009c7760) Data frame received for 1\nI0519 00:25:21.038872 2898 log.go:172] (0xc000b0a500) (1) Data frame handling\nI0519 00:25:21.038885 2898 log.go:172] (0xc000b0a500) (1) Data frame sent\nI0519 00:25:21.038896 2898 log.go:172] (0xc0009c7760) (0xc000b0a500) Stream removed, broadcasting: 1\nI0519 00:25:21.038905 2898 log.go:172] (0xc0009c7760) Go away received\nI0519 00:25:21.039247 2898 log.go:172] (0xc0009c7760) (0xc000b0a500) Stream removed, broadcasting: 1\nI0519 00:25:21.039271 2898 log.go:172] (0xc0009c7760) (0xc00063e5a0) Stream removed, broadcasting: 3\nI0519 00:25:21.039287 2898 log.go:172] (0xc0009c7760) (0xc00050a280) Stream removed, broadcasting: 5\n" May 19 00:25:21.045: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 00:25:21.045: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 00:25:21.048: INFO: Found 1 stateful pods, waiting for 3 May 19 00:25:31.053: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 19 00:25:31.054: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 19 00:25:31.054: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 19 00:25:31.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8811 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 00:25:31.300: INFO: stderr: "I0519 00:25:31.198475 2918 log.go:172] (0xc0009d0f20) (0xc000a9c640) Create stream\nI0519 00:25:31.198548 2918 log.go:172] (0xc0009d0f20) (0xc000a9c640) Stream added, broadcasting: 1\nI0519 00:25:31.204281 2918 log.go:172] (0xc0009d0f20) Reply frame received for 1\nI0519 00:25:31.204347 2918 log.go:172] (0xc0009d0f20) (0xc0004d8500) Create stream\nI0519 00:25:31.204363 2918 log.go:172] (0xc0009d0f20) (0xc0004d8500) Stream added, 
broadcasting: 3\nI0519 00:25:31.205318 2918 log.go:172] (0xc0009d0f20) Reply frame received for 3\nI0519 00:25:31.205348 2918 log.go:172] (0xc0009d0f20) (0xc0004701e0) Create stream\nI0519 00:25:31.205361 2918 log.go:172] (0xc0009d0f20) (0xc0004701e0) Stream added, broadcasting: 5\nI0519 00:25:31.206154 2918 log.go:172] (0xc0009d0f20) Reply frame received for 5\nI0519 00:25:31.292958 2918 log.go:172] (0xc0009d0f20) Data frame received for 5\nI0519 00:25:31.292997 2918 log.go:172] (0xc0004701e0) (5) Data frame handling\nI0519 00:25:31.293011 2918 log.go:172] (0xc0004701e0) (5) Data frame sent\nI0519 00:25:31.293022 2918 log.go:172] (0xc0009d0f20) Data frame received for 5\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 00:25:31.293035 2918 log.go:172] (0xc0004701e0) (5) Data frame handling\nI0519 00:25:31.293078 2918 log.go:172] (0xc0009d0f20) Data frame received for 3\nI0519 00:25:31.293330 2918 log.go:172] (0xc0004d8500) (3) Data frame handling\nI0519 00:25:31.293441 2918 log.go:172] (0xc0004d8500) (3) Data frame sent\nI0519 00:25:31.293473 2918 log.go:172] (0xc0009d0f20) Data frame received for 3\nI0519 00:25:31.293516 2918 log.go:172] (0xc0004d8500) (3) Data frame handling\nI0519 00:25:31.295091 2918 log.go:172] (0xc0009d0f20) Data frame received for 1\nI0519 00:25:31.295113 2918 log.go:172] (0xc000a9c640) (1) Data frame handling\nI0519 00:25:31.295125 2918 log.go:172] (0xc000a9c640) (1) Data frame sent\nI0519 00:25:31.295144 2918 log.go:172] (0xc0009d0f20) (0xc000a9c640) Stream removed, broadcasting: 1\nI0519 00:25:31.295169 2918 log.go:172] (0xc0009d0f20) Go away received\nI0519 00:25:31.295797 2918 log.go:172] (0xc0009d0f20) (0xc000a9c640) Stream removed, broadcasting: 1\nI0519 00:25:31.295841 2918 log.go:172] (0xc0009d0f20) (0xc0004d8500) Stream removed, broadcasting: 3\nI0519 00:25:31.295855 2918 log.go:172] (0xc0009d0f20) (0xc0004701e0) Stream removed, broadcasting: 5\n" May 19 00:25:31.300: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 00:25:31.300: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 00:25:31.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8811 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 00:25:31.551: INFO: stderr: "I0519 00:25:31.434730 2938 log.go:172] (0xc0000e4d10) (0xc000bb8320) Create stream\nI0519 00:25:31.434788 2938 log.go:172] (0xc0000e4d10) (0xc000bb8320) Stream added, broadcasting: 1\nI0519 00:25:31.440743 2938 log.go:172] (0xc0000e4d10) Reply frame received for 1\nI0519 00:25:31.440777 2938 log.go:172] (0xc0000e4d10) (0xc0006e8640) Create stream\nI0519 00:25:31.440788 2938 log.go:172] (0xc0000e4d10) (0xc0006e8640) Stream added, broadcasting: 3\nI0519 00:25:31.442110 2938 log.go:172] (0xc0000e4d10) Reply frame received for 3\nI0519 00:25:31.442148 2938 log.go:172] (0xc0000e4d10) (0xc0006e8fa0) Create stream\nI0519 00:25:31.442161 2938 log.go:172] (0xc0000e4d10) (0xc0006e8fa0) Stream added, broadcasting: 5\nI0519 00:25:31.443092 2938 log.go:172] (0xc0000e4d10) Reply frame received for 5\nI0519 00:25:31.514536 2938 log.go:172] (0xc0000e4d10) Data frame received for 5\nI0519 00:25:31.514561 2938 log.go:172] (0xc0006e8fa0) (5) Data frame handling\nI0519 00:25:31.514576 2938 log.go:172] (0xc0006e8fa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 
00:25:31.542564 2938 log.go:172] (0xc0000e4d10) Data frame received for 3\nI0519 00:25:31.542615 2938 log.go:172] (0xc0006e8640) (3) Data frame handling\nI0519 00:25:31.542657 2938 log.go:172] (0xc0006e8640) (3) Data frame sent\nI0519 00:25:31.542676 2938 log.go:172] (0xc0000e4d10) Data frame received for 3\nI0519 00:25:31.542686 2938 log.go:172] (0xc0006e8640) (3) Data frame handling\nI0519 00:25:31.542917 2938 log.go:172] (0xc0000e4d10) Data frame received for 5\nI0519 00:25:31.542947 2938 log.go:172] (0xc0006e8fa0) (5) Data frame handling\nI0519 00:25:31.545609 2938 log.go:172] (0xc0000e4d10) Data frame received for 1\nI0519 00:25:31.545637 2938 log.go:172] (0xc000bb8320) (1) Data frame handling\nI0519 00:25:31.545665 2938 log.go:172] (0xc000bb8320) (1) Data frame sent\nI0519 00:25:31.545694 2938 log.go:172] (0xc0000e4d10) (0xc000bb8320) Stream removed, broadcasting: 1\nI0519 00:25:31.545723 2938 log.go:172] (0xc0000e4d10) Go away received\nI0519 00:25:31.546143 2938 log.go:172] (0xc0000e4d10) (0xc000bb8320) Stream removed, broadcasting: 1\nI0519 00:25:31.546171 2938 log.go:172] (0xc0000e4d10) (0xc0006e8640) Stream removed, broadcasting: 3\nI0519 00:25:31.546184 2938 log.go:172] (0xc0000e4d10) (0xc0006e8fa0) Stream removed, broadcasting: 5\n" May 19 00:25:31.551: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 00:25:31.551: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 00:25:31.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8811 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 00:25:31.802: INFO: stderr: "I0519 00:25:31.678102 2960 log.go:172] (0xc000a41130) (0xc0006da5a0) Create stream\nI0519 00:25:31.678156 2960 log.go:172] (0xc000a41130) (0xc0006da5a0) Stream added, broadcasting: 1\nI0519 00:25:31.687095 2960 log.go:172] (0xc000a41130) Reply frame received for 1\nI0519 00:25:31.687140 2960 log.go:172] (0xc000a41130) (0xc0006d14a0) Create stream\nI0519 00:25:31.687150 2960 log.go:172] (0xc000a41130) (0xc0006d14a0) Stream added, broadcasting: 3\nI0519 00:25:31.688329 2960 log.go:172] (0xc000a41130) Reply frame received for 3\nI0519 00:25:31.688356 2960 log.go:172] (0xc000a41130) (0xc0006c4c80) Create stream\nI0519 00:25:31.688367 2960 log.go:172] (0xc000a41130) (0xc0006c4c80) Stream added, broadcasting: 5\nI0519 00:25:31.692064 2960 log.go:172] (0xc000a41130) Reply frame received for 5\nI0519 00:25:31.766342 2960 log.go:172] (0xc000a41130) Data frame received for 5\nI0519 00:25:31.766370 2960 log.go:172] (0xc0006c4c80) (5) Data frame handling\nI0519 00:25:31.766391 2960 log.go:172] (0xc0006c4c80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 00:25:31.794237 2960 log.go:172] (0xc000a41130) Data frame received for 3\nI0519 00:25:31.794282 2960 log.go:172] (0xc0006d14a0) (3) Data frame handling\nI0519 00:25:31.794310 2960 log.go:172] (0xc0006d14a0) (3) Data frame sent\nI0519 00:25:31.794463 2960 log.go:172] (0xc000a41130) Data frame received for 5\nI0519 00:25:31.794492 2960 log.go:172] (0xc0006c4c80) (5) Data frame handling\nI0519 00:25:31.794672 2960 log.go:172] (0xc000a41130) Data frame received for 3\nI0519 00:25:31.794702 2960 log.go:172] (0xc0006d14a0) (3) Data frame handling\nI0519 00:25:31.796720 2960 log.go:172] (0xc000a41130) Data frame received for 1\nI0519 00:25:31.796749 2960 
log.go:172] (0xc0006da5a0) (1) Data frame handling\nI0519 00:25:31.796786 2960 log.go:172] (0xc0006da5a0) (1) Data frame sent\nI0519 00:25:31.796814 2960 log.go:172] (0xc000a41130) (0xc0006da5a0) Stream removed, broadcasting: 1\nI0519 00:25:31.796862 2960 log.go:172] (0xc000a41130) Go away received\nI0519 00:25:31.797459 2960 log.go:172] (0xc000a41130) (0xc0006da5a0) Stream removed, broadcasting: 1\nI0519 00:25:31.797480 2960 log.go:172] (0xc000a41130) (0xc0006d14a0) Stream removed, broadcasting: 3\nI0519 00:25:31.797491 2960 log.go:172] (0xc000a41130) (0xc0006c4c80) Stream removed, broadcasting: 5\n" May 19 00:25:31.802: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 00:25:31.803: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 00:25:31.803: INFO: Waiting for statefulset status.replicas updated to 0 May 19 00:25:31.805: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 19 00:25:41.820: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 00:25:41.820: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 19 00:25:41.820: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 19 00:25:41.854: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999468s May 19 00:25:42.859: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.972370535s May 19 00:25:43.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.966542865s May 19 00:25:44.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.960633186s May 19 00:25:45.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.956021139s May 19 00:25:46.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.950585453s May 19 00:25:47.886: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.944476105s May 19 00:25:48.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.939963453s May 19 00:25:49.896: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.934034311s May 19 00:25:50.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 929.551863ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8811 May 19 00:25:51.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8811 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 00:25:52.172: INFO: stderr: "I0519 00:25:52.073326 2979 log.go:172] (0xc000ac1130) (0xc000864f00) Create stream\nI0519 00:25:52.073384 2979 log.go:172] (0xc000ac1130) (0xc000864f00) Stream added, broadcasting: 1\nI0519 00:25:52.077337 2979 log.go:172] (0xc000ac1130) Reply frame received for 1\nI0519 00:25:52.077382 2979 log.go:172] (0xc000ac1130) (0xc0005c6280) Create stream\nI0519 00:25:52.077393 2979 log.go:172] (0xc000ac1130) (0xc0005c6280) Stream added, broadcasting: 3\nI0519 00:25:52.078284 2979 log.go:172] (0xc000ac1130) Reply frame received for 3\nI0519 00:25:52.078322 2979 log.go:172] (0xc000ac1130) (0xc0005581e0) Create stream\nI0519 00:25:52.078339 2979 log.go:172] (0xc000ac1130) (0xc0005581e0) Stream added, broadcasting: 5\nI0519 00:25:52.079168 2979 log.go:172] (0xc000ac1130) Reply frame received for 
5\nI0519 00:25:52.164738 2979 log.go:172] (0xc000ac1130) Data frame received for 3\nI0519 00:25:52.164768 2979 log.go:172] (0xc0005c6280) (3) Data frame handling\nI0519 00:25:52.164794 2979 log.go:172] (0xc0005c6280) (3) Data frame sent\nI0519 00:25:52.164805 2979 log.go:172] (0xc000ac1130) Data frame received for 3\nI0519 00:25:52.164813 2979 log.go:172] (0xc0005c6280) (3) Data frame handling\nI0519 00:25:52.165032 2979 log.go:172] (0xc000ac1130) Data frame received for 5\nI0519 00:25:52.165058 2979 log.go:172] (0xc0005581e0) (5) Data frame handling\nI0519 00:25:52.165072 2979 log.go:172] (0xc0005581e0) (5) Data frame sent\nI0519 00:25:52.165085 2979 log.go:172] (0xc000ac1130) Data frame received for 5\nI0519 00:25:52.165096 2979 log.go:172] (0xc0005581e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 00:25:52.166860 2979 log.go:172] (0xc000ac1130) Data frame received for 1\nI0519 00:25:52.166879 2979 log.go:172] (0xc000864f00) (1) Data frame handling\nI0519 00:25:52.166902 2979 log.go:172] (0xc000864f00) (1) Data frame sent\nI0519 00:25:52.166927 2979 log.go:172] (0xc000ac1130) (0xc000864f00) Stream removed, broadcasting: 1\nI0519 00:25:52.167077 2979 log.go:172] (0xc000ac1130) Go away received\nI0519 00:25:52.167235 2979 log.go:172] (0xc000ac1130) (0xc000864f00) Stream removed, broadcasting: 1\nI0519 00:25:52.167250 2979 log.go:172] (0xc000ac1130) (0xc0005c6280) Stream removed, broadcasting: 3\nI0519 00:25:52.167257 2979 log.go:172] (0xc000ac1130) (0xc0005581e0) Stream removed, broadcasting: 5\n" May 19 00:25:52.172: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 00:25:52.172: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 00:25:52.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8811 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 00:25:52.381: INFO: stderr: "I0519 00:25:52.303559 2999 log.go:172] (0xc000b5afd0) (0xc000a32500) Create stream\nI0519 00:25:52.303612 2999 log.go:172] (0xc000b5afd0) (0xc000a32500) Stream added, broadcasting: 1\nI0519 00:25:52.306001 2999 log.go:172] (0xc000b5afd0) Reply frame received for 1\nI0519 00:25:52.306034 2999 log.go:172] (0xc000b5afd0) (0xc000afe280) Create stream\nI0519 00:25:52.306050 2999 log.go:172] (0xc000b5afd0) (0xc000afe280) Stream added, broadcasting: 3\nI0519 00:25:52.306993 2999 log.go:172] (0xc000b5afd0) Reply frame received for 3\nI0519 00:25:52.307032 2999 log.go:172] (0xc000b5afd0) (0xc0005752c0) Create stream\nI0519 00:25:52.307066 2999 log.go:172] (0xc000b5afd0) (0xc0005752c0) Stream added, broadcasting: 5\nI0519 00:25:52.307968 2999 log.go:172] (0xc000b5afd0) Reply frame received for 5\nI0519 00:25:52.373589 2999 log.go:172] (0xc000b5afd0) Data frame received for 3\nI0519 00:25:52.373638 2999 log.go:172] (0xc000afe280) (3) Data frame handling\nI0519 00:25:52.373661 2999 log.go:172] (0xc000afe280) (3) Data frame sent\nI0519 00:25:52.373677 2999 log.go:172] (0xc000b5afd0) Data frame received for 3\nI0519 00:25:52.373743 2999 log.go:172] (0xc000b5afd0) Data frame received for 5\nI0519 00:25:52.373811 2999 log.go:172] (0xc0005752c0) (5) Data frame handling\nI0519 00:25:52.373849 2999 log.go:172] (0xc0005752c0) (5) Data frame sent\nI0519 00:25:52.373869 2999 log.go:172] (0xc000b5afd0) Data frame received for 5\nI0519 00:25:52.373891 
2999 log.go:172] (0xc0005752c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 00:25:52.373925 2999 log.go:172] (0xc000afe280) (3) Data frame handling\nI0519 00:25:52.375274 2999 log.go:172] (0xc000b5afd0) Data frame received for 1\nI0519 00:25:52.375344 2999 log.go:172] (0xc000a32500) (1) Data frame handling\nI0519 00:25:52.375380 2999 log.go:172] (0xc000a32500) (1) Data frame sent\nI0519 00:25:52.375543 2999 log.go:172] (0xc000b5afd0) (0xc000a32500) Stream removed, broadcasting: 1\nI0519 00:25:52.375602 2999 log.go:172] (0xc000b5afd0) Go away received\nI0519 00:25:52.375966 2999 log.go:172] (0xc000b5afd0) (0xc000a32500) Stream removed, broadcasting: 1\nI0519 00:25:52.375984 2999 log.go:172] (0xc000b5afd0) (0xc000afe280) Stream removed, broadcasting: 3\nI0519 00:25:52.375992 2999 log.go:172] (0xc000b5afd0) (0xc0005752c0) Stream removed, broadcasting: 5\n" May 19 00:25:52.381: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 00:25:52.381: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 00:25:52.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8811 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 00:25:52.591: INFO: stderr: "I0519 00:25:52.512306 3019 log.go:172] (0xc00003b600) (0xc0006d9c20) Create stream\nI0519 00:25:52.512364 3019 log.go:172] (0xc00003b600) (0xc0006d9c20) Stream added, broadcasting: 1\nI0519 00:25:52.518220 3019 log.go:172] (0xc00003b600) Reply frame received for 1\nI0519 00:25:52.518247 3019 log.go:172] (0xc00003b600) (0xc0006a6dc0) Create stream\nI0519 00:25:52.518254 3019 log.go:172] (0xc00003b600) (0xc0006a6dc0) Stream added, broadcasting: 3\nI0519 00:25:52.519055 3019 log.go:172] (0xc00003b600) Reply frame received for 3\nI0519 00:25:52.519089 3019 log.go:172] (0xc00003b600) (0xc000552140) Create stream\nI0519 00:25:52.519109 3019 log.go:172] (0xc00003b600) (0xc000552140) Stream added, broadcasting: 5\nI0519 00:25:52.519874 3019 log.go:172] (0xc00003b600) Reply frame received for 5\nI0519 00:25:52.585469 3019 log.go:172] (0xc00003b600) Data frame received for 3\nI0519 00:25:52.585503 3019 log.go:172] (0xc0006a6dc0) (3) Data frame handling\nI0519 00:25:52.585515 3019 log.go:172] (0xc0006a6dc0) (3) Data frame sent\nI0519 00:25:52.585525 3019 log.go:172] (0xc00003b600) Data frame received for 3\nI0519 00:25:52.585532 3019 log.go:172] (0xc0006a6dc0) (3) Data frame handling\nI0519 00:25:52.585562 3019 log.go:172] (0xc00003b600) Data frame received for 5\nI0519 00:25:52.585571 3019 log.go:172] (0xc000552140) (5) Data frame handling\nI0519 00:25:52.585586 3019 log.go:172] (0xc000552140) (5) Data frame sent\nI0519 00:25:52.585594 3019 log.go:172] (0xc00003b600) Data frame received for 5\nI0519 00:25:52.585600 3019 log.go:172] (0xc000552140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 00:25:52.587056 3019 log.go:172] (0xc00003b600) Data frame received for 1\nI0519 00:25:52.587069 3019 log.go:172] (0xc0006d9c20) (1) Data frame handling\nI0519 00:25:52.587079 3019 log.go:172] (0xc0006d9c20) (1) Data frame sent\nI0519 00:25:52.587158 3019 log.go:172] (0xc00003b600) (0xc0006d9c20) Stream removed, broadcasting: 1\nI0519 00:25:52.587181 3019 log.go:172] (0xc00003b600) Go away received\nI0519 00:25:52.587535 3019 log.go:172] (0xc00003b600) 
(0xc0006d9c20) Stream removed, broadcasting: 1\nI0519 00:25:52.587562 3019 log.go:172] (0xc00003b600) (0xc0006a6dc0) Stream removed, broadcasting: 3\nI0519 00:25:52.587573 3019 log.go:172] (0xc00003b600) (0xc000552140) Stream removed, broadcasting: 5\n" May 19 00:25:52.591: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 00:25:52.591: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 00:25:52.591: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 19 00:26:22.615: INFO: Deleting all statefulset in ns statefulset-8811 May 19 00:26:22.617: INFO: Scaling statefulset ss to 0 May 19 00:26:22.660: INFO: Waiting for statefulset status.replicas updated to 0 May 19 00:26:22.663: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:26:22.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8811" for this suite. • [SLOW TEST:92.620 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":131,"skipped":2359,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:26:22.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-52be17f8-d6f7-4ea6-b1a3-76032e6e19d3 STEP: Creating a pod to test consume configMaps May 19 00:26:22.821: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-32256a81-bf64-4f76-b7ea-64c8e52533f8" in namespace "projected-1510" to be "Succeeded or Failed" May 19 00:26:22.825: INFO: Pod "pod-projected-configmaps-32256a81-bf64-4f76-b7ea-64c8e52533f8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.036105ms May 19 00:26:24.830: INFO: Pod "pod-projected-configmaps-32256a81-bf64-4f76-b7ea-64c8e52533f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008685531s May 19 00:26:26.834: INFO: Pod "pod-projected-configmaps-32256a81-bf64-4f76-b7ea-64c8e52533f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012846715s STEP: Saw pod success May 19 00:26:26.834: INFO: Pod "pod-projected-configmaps-32256a81-bf64-4f76-b7ea-64c8e52533f8" satisfied condition "Succeeded or Failed" May 19 00:26:26.837: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-32256a81-bf64-4f76-b7ea-64c8e52533f8 container projected-configmap-volume-test: STEP: delete the pod May 19 00:26:26.868: INFO: Waiting for pod pod-projected-configmaps-32256a81-bf64-4f76-b7ea-64c8e52533f8 to disappear May 19 00:26:26.884: INFO: Pod pod-projected-configmaps-32256a81-bf64-4f76-b7ea-64c8e52533f8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:26:26.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1510" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2374,"failed":0} ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:26:26.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:26:26.975: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7f5ca516-ce44-4264-a7c5-dcedb0f4286d" in namespace "security-context-test-5614" to be "Succeeded or Failed" May 19 00:26:27.000: INFO: Pod "alpine-nnp-false-7f5ca516-ce44-4264-a7c5-dcedb0f4286d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.15624ms May 19 00:26:29.004: INFO: Pod "alpine-nnp-false-7f5ca516-ce44-4264-a7c5-dcedb0f4286d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029655934s May 19 00:26:31.010: INFO: Pod "alpine-nnp-false-7f5ca516-ce44-4264-a7c5-dcedb0f4286d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035652936s May 19 00:26:31.010: INFO: Pod "alpine-nnp-false-7f5ca516-ce44-4264-a7c5-dcedb0f4286d" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:26:31.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5614" for this suite. 
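------------------------------
allowPrivilegeEscalation: false sets the no_new_privs flag on the container process, so setuid binaries and file capabilities cannot grant the process more privileges than it started with. A sketch of an equivalent hand check, assuming a kernel recent enough to expose NoNewPrivs in /proc/self/status (the test itself uses a purpose-built alpine-nnp image, not this one-liner):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false
EOF
kubectl logs nnp-demo    # expect "NoNewPrivs: 1"
------------------------------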
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":133,"skipped":2374,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:26:31.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-m2sq STEP: Creating a pod to test atomic-volume-subpath May 19 00:26:31.328: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-m2sq" in namespace "subpath-3811" to be "Succeeded or Failed" May 19 00:26:31.370: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Pending", Reason="", readiness=false. Elapsed: 42.446575ms May 19 00:26:33.373: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045667389s May 19 00:26:35.377: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 4.049802199s May 19 00:26:37.381: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 6.053644237s May 19 00:26:39.385: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 8.057271096s May 19 00:26:41.389: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 10.061442677s May 19 00:26:43.394: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 12.066300513s May 19 00:26:45.399: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 14.070959404s May 19 00:26:47.403: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 16.075530041s May 19 00:26:49.411: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 18.083047736s May 19 00:26:51.415: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 20.087630342s May 19 00:26:53.419: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Running", Reason="", readiness=true. Elapsed: 22.091658445s May 19 00:26:55.424: INFO: Pod "pod-subpath-test-projected-m2sq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.09671753s STEP: Saw pod success May 19 00:26:55.424: INFO: Pod "pod-subpath-test-projected-m2sq" satisfied condition "Succeeded or Failed" May 19 00:26:55.428: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-m2sq container test-container-subpath-projected-m2sq: STEP: delete the pod May 19 00:26:55.570: INFO: Waiting for pod pod-subpath-test-projected-m2sq to disappear May 19 00:26:55.580: INFO: Pod pod-subpath-test-projected-m2sq no longer exists STEP: Deleting pod pod-subpath-test-projected-m2sq May 19 00:26:55.580: INFO: Deleting pod "pod-subpath-test-projected-m2sq" in namespace "subpath-3811" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:26:55.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3811" for this suite. • [SLOW TEST:24.558 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":134,"skipped":2381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:26:55.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-3fd36980-00bc-4160-88a5-d1c1043003e9 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-3fd36980-00bc-4160-88a5-d1c1043003e9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:27:01.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1140" for this suite. 
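------------------------------
Keys projected through a ConfigMap volume are refreshed in place when the ConfigMap object changes (environment variables are not, and neither are subPath mounts); the kubelet syncs on its own cadence, so the update can take up to about a minute to appear, which is why the test "waits to observe update in volume". A hand-run sketch with illustrative names:

kubectl create configmap live-cm --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cm
  volumes:
  - name: cfg
    configMap:
      name: live-cm
EOF
kubectl exec cm-watch -- cat /etc/cm/key                         # value-1
kubectl patch configmap live-cm -p '{"data":{"key":"value-2"}}'
kubectl exec cm-watch -- cat /etc/cm/key                         # eventually value-2
------------------------------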
• [SLOW TEST:6.181 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2425,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:27:01.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 19 00:27:01.837: INFO: Waiting up to 5m0s for pod "downward-api-957bf1b2-3314-46bc-801e-5b2c39d10397" in namespace "downward-api-2505" to be "Succeeded or Failed" May 19 00:27:01.843: INFO: Pod "downward-api-957bf1b2-3314-46bc-801e-5b2c39d10397": Phase="Pending", Reason="", readiness=false. Elapsed: 5.394195ms May 19 00:27:03.847: INFO: Pod "downward-api-957bf1b2-3314-46bc-801e-5b2c39d10397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009949811s May 19 00:27:05.851: INFO: Pod "downward-api-957bf1b2-3314-46bc-801e-5b2c39d10397": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013956058s STEP: Saw pod success May 19 00:27:05.852: INFO: Pod "downward-api-957bf1b2-3314-46bc-801e-5b2c39d10397" satisfied condition "Succeeded or Failed" May 19 00:27:05.855: INFO: Trying to get logs from node latest-worker2 pod downward-api-957bf1b2-3314-46bc-801e-5b2c39d10397 container dapi-container: STEP: delete the pod May 19 00:27:05.932: INFO: Waiting for pod downward-api-957bf1b2-3314-46bc-801e-5b2c39d10397 to disappear May 19 00:27:05.939: INFO: Pod downward-api-957bf1b2-3314-46bc-801e-5b2c39d10397 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:27:05.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2505" for this suite. 
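[Editor's note] The Downward API test that follows injects the node's IP into the container environment through a fieldRef on `status.hostIP`, which the kubelet resolves at container start. A sketch under the same assumptions as above (pod, container, and image names are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-host-ip"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					// Resolved by the kubelet when the container starts, so the
					// env var carries the IP of whichever node the pod landed on.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```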
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2438,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:27:05.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-962.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-962.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 00:27:12.140: INFO: DNS probes using dns-962/dns-test-33ec89f7-89fc-40d6-a53a-19cd825b9f63 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:27:12.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-962" for this suite. 
• [SLOW TEST:6.321 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":137,"skipped":2450,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:27:12.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-799a5af4-c03f-4279-affa-ee242a4efcf4 in namespace container-probe-7116 May 19 00:27:16.732: INFO: Started pod liveness-799a5af4-c03f-4279-affa-ee242a4efcf4 in namespace container-probe-7116 STEP: checking the pod's current state and verifying that restartCount is present May 19 00:27:16.735: INFO: Initial restart count of pod liveness-799a5af4-c03f-4279-affa-ee242a4efcf4 is 0 May 19 00:27:34.789: INFO: Restart count of pod container-probe-7116/liveness-799a5af4-c03f-4279-affa-ee242a4efcf4 is now 1 (18.054543794s elapsed) May 19 00:27:54.846: INFO: Restart count of pod container-probe-7116/liveness-799a5af4-c03f-4279-affa-ee242a4efcf4 is now 2 (38.111285677s elapsed) May 19 00:28:14.919: INFO: Restart count of pod container-probe-7116/liveness-799a5af4-c03f-4279-affa-ee242a4efcf4 is now 3 (58.184098695s elapsed) May 19 00:28:35.038: INFO: Restart count of pod container-probe-7116/liveness-799a5af4-c03f-4279-affa-ee242a4efcf4 is now 4 (1m18.30339406s elapsed) May 19 00:29:47.202: INFO: Restart count of pod container-probe-7116/liveness-799a5af4-c03f-4279-affa-ee242a4efcf4 is now 5 (2m30.467123944s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:29:47.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7116" for this suite. 
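[Editor's note] The container-probe test below drives a pod whose liveness check passes briefly and then fails forever, so the kubelet keeps restarting the container and `status.restartCount` climbs monotonically. A minimal sketch of such a spec (image and commands are illustrative, and the embedded probe field is named `Handler` in release-1.18-era `k8s.io/api`; newer versions rename it to `ProbeHandler`):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for 10s, then the probe file disappears and every
				// probe fails; the default restartPolicy of Always makes the
				// kubelet restart the container, bumping restartCount each time.
				Command: []string{"sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

The widening gaps between restarts in the log (roughly 18s, 38s, 58s, 78s, then 150s) reflect the kubelet's exponential back-off between container restarts.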
• [SLOW TEST:155.023 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":138,"skipped":2456,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:29:47.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0519 00:29:57.759954 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 00:29:57.760: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:29:57.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5836" for this suite. 
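[Editor's note] "Not orphaning" in the garbage-collector test below means the ReplicationController is deleted with a cascading propagation policy, so the GC follows the pods' ownerReferences and deletes them too. A sketch of issuing such a delete with client-go (kubeconfig path, namespace, and RC name are illustrative; v0.18-era client-go signatures assumed):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background (or Foreground) propagation tells the garbage collector to
	// delete the pods owned by the RC; Orphan would leave them behind.
	policy := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("gc-5836").Delete(
		context.TODO(), "simpletest.rc",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	fmt.Println("delete:", err)
}
```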
• [SLOW TEST:10.475 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":139,"skipped":2465,"failed":0} [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:29:57.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 19 00:30:04.413: INFO: Successfully updated pod "adopt-release-257kb" STEP: Checking that the Job readopts the Pod May 19 00:30:04.413: INFO: Waiting up to 15m0s for pod "adopt-release-257kb" in namespace "job-5164" to be "adopted" May 19 00:30:04.434: INFO: Pod "adopt-release-257kb": Phase="Running", Reason="", readiness=true. Elapsed: 20.456646ms May 19 00:30:06.437: INFO: Pod "adopt-release-257kb": Phase="Running", Reason="", readiness=true. Elapsed: 2.023563775s May 19 00:30:06.437: INFO: Pod "adopt-release-257kb" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 19 00:30:06.947: INFO: Successfully updated pod "adopt-release-257kb" STEP: Checking that the Job releases the Pod May 19 00:30:06.947: INFO: Waiting up to 15m0s for pod "adopt-release-257kb" in namespace "job-5164" to be "released" May 19 00:30:06.986: INFO: Pod "adopt-release-257kb": Phase="Running", Reason="", readiness=true. Elapsed: 38.410836ms May 19 00:30:06.986: INFO: Pod "adopt-release-257kb" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:30:06.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5164" for this suite. 
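[Editor's note] Adoption and release in the Job test below both hinge on whether the pod's labels match the Job's selector: the orphaned pod is readopted because its labels still match, and stripping the labels makes the controller release it. A sketch of that label-removal step (namespace and pod name copied from the log; the rest is an assumption about how one might reproduce it, not the test's own code):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pod, err := client.CoreV1().Pods("job-5164").Get(ctx, "adopt-release-257kb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// With no labels left to match the Job's selector, the Job controller
	// releases the pod by removing its controllerRef on the next sync.
	pod.Labels = nil
	_, err = client.CoreV1().Pods("job-5164").Update(ctx, pod, metav1.UpdateOptions{})
	fmt.Println("update:", err)
}
```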
• [SLOW TEST:9.309 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":140,"skipped":2465,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:30:07.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:30:07.145: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 19 00:30:10.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1644 create -f -' May 19 00:30:13.509: INFO: stderr: "" May 19 00:30:13.509: INFO: stdout: "e2e-test-crd-publish-openapi-8305-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 19 00:30:13.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1644 delete e2e-test-crd-publish-openapi-8305-crds test-foo' May 19 00:30:13.607: INFO: stderr: "" May 19 00:30:13.607: INFO: stdout: "e2e-test-crd-publish-openapi-8305-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 19 00:30:13.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1644 apply -f -' May 19 00:30:13.894: INFO: stderr: "" May 19 00:30:13.895: INFO: stdout: "e2e-test-crd-publish-openapi-8305-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 19 00:30:13.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1644 delete e2e-test-crd-publish-openapi-8305-crds test-foo' May 19 00:30:14.001: INFO: stderr: "" May 19 00:30:14.002: INFO: stdout: "e2e-test-crd-publish-openapi-8305-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 19 00:30:14.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1644 create -f -' May 19 00:30:14.256: INFO: rc: 1 May 19 00:30:14.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1644 apply -f -' May 19 00:30:14.497: INFO: rc: 1 STEP: 
client-side validation (kubectl create and apply) rejects request without required properties May 19 00:30:14.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1644 create -f -' May 19 00:30:14.722: INFO: rc: 1 May 19 00:30:14.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1644 apply -f -' May 19 00:30:15.011: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 19 00:30:15.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8305-crds' May 19 00:30:15.307: INFO: stderr: "" May 19 00:30:15.307: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8305-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 19 00:30:15.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8305-crds.metadata' May 19 00:30:15.581: INFO: stderr: "" May 19 00:30:15.581: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8305-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). 
Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 19 00:30:15.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8305-crds.spec' May 19 00:30:15.839: INFO: stderr: "" May 19 00:30:15.839: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8305-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 19 00:30:15.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8305-crds.spec.bars' May 19 00:30:16.063: INFO: stderr: "" May 19 00:30:16.063: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8305-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 19 00:30:16.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8305-crds.spec.bars2' May 19 00:30:16.319: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:30:18.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1644" for this suite. • [SLOW TEST:11.197 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":141,"skipped":2469,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:30:18.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-7f20dd13-80fd-4ce1-9583-58d5c20c941e STEP: Creating a pod to test consume configMaps May 19 00:30:18.366: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9467cb4-d331-44b4-bb71-58e76798adb0" in namespace "projected-6499" to be "Succeeded or Failed" May 19 00:30:18.399: INFO: Pod "pod-projected-configmaps-e9467cb4-d331-44b4-bb71-58e76798adb0": 
Phase="Pending", Reason="", readiness=false. Elapsed: 33.350691ms May 19 00:30:20.403: INFO: Pod "pod-projected-configmaps-e9467cb4-d331-44b4-bb71-58e76798adb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037570381s May 19 00:30:22.408: INFO: Pod "pod-projected-configmaps-e9467cb4-d331-44b4-bb71-58e76798adb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042157517s STEP: Saw pod success May 19 00:30:22.408: INFO: Pod "pod-projected-configmaps-e9467cb4-d331-44b4-bb71-58e76798adb0" satisfied condition "Succeeded or Failed" May 19 00:30:22.411: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-e9467cb4-d331-44b4-bb71-58e76798adb0 container projected-configmap-volume-test: STEP: delete the pod May 19 00:30:22.456: INFO: Waiting for pod pod-projected-configmaps-e9467cb4-d331-44b4-bb71-58e76798adb0 to disappear May 19 00:30:22.464: INFO: Pod pod-projected-configmaps-e9467cb4-d331-44b4-bb71-58e76798adb0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:30:22.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6499" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":142,"skipped":2470,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:30:22.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6767 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6767 STEP: creating replication controller externalsvc in namespace services-6767 I0519 00:30:22.844193 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6767, replica count: 2 I0519 00:30:25.894546 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:30:28.894736 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 19 00:30:28.976: INFO: Creating new exec pod May 19 00:30:33.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6767 execpod68flc -- /bin/sh -x -c nslookup nodeport-service' May 19 00:30:33.286: INFO: stderr: "I0519 
00:30:33.187894 3331 log.go:172] (0xc0009c7130) (0xc000a346e0) Create stream\nI0519 00:30:33.187945 3331 log.go:172] (0xc0009c7130) (0xc000a346e0) Stream added, broadcasting: 1\nI0519 00:30:33.193024 3331 log.go:172] (0xc0009c7130) Reply frame received for 1\nI0519 00:30:33.193055 3331 log.go:172] (0xc0009c7130) (0xc000526280) Create stream\nI0519 00:30:33.193063 3331 log.go:172] (0xc0009c7130) (0xc000526280) Stream added, broadcasting: 3\nI0519 00:30:33.194258 3331 log.go:172] (0xc0009c7130) Reply frame received for 3\nI0519 00:30:33.194306 3331 log.go:172] (0xc0009c7130) (0xc000368780) Create stream\nI0519 00:30:33.194329 3331 log.go:172] (0xc0009c7130) (0xc000368780) Stream added, broadcasting: 5\nI0519 00:30:33.195176 3331 log.go:172] (0xc0009c7130) Reply frame received for 5\nI0519 00:30:33.272000 3331 log.go:172] (0xc0009c7130) Data frame received for 5\nI0519 00:30:33.272033 3331 log.go:172] (0xc000368780) (5) Data frame handling\nI0519 00:30:33.272048 3331 log.go:172] (0xc000368780) (5) Data frame sent\n+ nslookup nodeport-service\nI0519 00:30:33.278701 3331 log.go:172] (0xc0009c7130) Data frame received for 3\nI0519 00:30:33.278718 3331 log.go:172] (0xc000526280) (3) Data frame handling\nI0519 00:30:33.278739 3331 log.go:172] (0xc000526280) (3) Data frame sent\nI0519 00:30:33.279516 3331 log.go:172] (0xc0009c7130) Data frame received for 3\nI0519 00:30:33.279546 3331 log.go:172] (0xc000526280) (3) Data frame handling\nI0519 00:30:33.279568 3331 log.go:172] (0xc000526280) (3) Data frame sent\nI0519 00:30:33.280118 3331 log.go:172] (0xc0009c7130) Data frame received for 3\nI0519 00:30:33.280140 3331 log.go:172] (0xc000526280) (3) Data frame handling\nI0519 00:30:33.280161 3331 log.go:172] (0xc0009c7130) Data frame received for 5\nI0519 00:30:33.280173 3331 log.go:172] (0xc000368780) (5) Data frame handling\nI0519 00:30:33.281806 3331 log.go:172] (0xc0009c7130) Data frame received for 1\nI0519 00:30:33.281827 3331 log.go:172] (0xc000a346e0) (1) Data frame handling\nI0519 00:30:33.281839 3331 log.go:172] (0xc000a346e0) (1) Data frame sent\nI0519 00:30:33.281857 3331 log.go:172] (0xc0009c7130) (0xc000a346e0) Stream removed, broadcasting: 1\nI0519 00:30:33.281885 3331 log.go:172] (0xc0009c7130) Go away received\nI0519 00:30:33.282145 3331 log.go:172] (0xc0009c7130) (0xc000a346e0) Stream removed, broadcasting: 1\nI0519 00:30:33.282172 3331 log.go:172] (0xc0009c7130) (0xc000526280) Stream removed, broadcasting: 3\nI0519 00:30:33.282187 3331 log.go:172] (0xc0009c7130) (0xc000368780) Stream removed, broadcasting: 5\n" May 19 00:30:33.286: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6767.svc.cluster.local\tcanonical name = externalsvc.services-6767.svc.cluster.local.\nName:\texternalsvc.services-6767.svc.cluster.local\nAddress: 10.109.175.94\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6767, will wait for the garbage collector to delete the pods May 19 00:30:33.349: INFO: Deleting ReplicationController externalsvc took: 5.567266ms May 19 00:30:33.649: INFO: Terminating ReplicationController externalsvc pods took: 300.356194ms May 19 00:30:45.461: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:30:45.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6767" for this suite. 
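[Editor's note] The nslookup output above shows the conversion taking effect: `nodeport-service` now resolves as a canonical name for `externalsvc`, because an ExternalName service is pure DNS. A sketch of flipping the type with client-go (names copied from the log; clearing the cluster IP and ports reflects my reading of the API's requirement that ExternalName services carry neither):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	svc, err := client.CoreV1().Services("services-6767").Get(ctx, "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ExternalName services are resolved purely in DNS, so drop the cluster
	// IP and node ports as part of the conversion.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-6767.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil
	_, err = client.CoreV1().Services("services-6767").Update(ctx, svc, metav1.UpdateOptions{})
	fmt.Println("update:", err)
}
```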
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.031 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":143,"skipped":2488,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:30:45.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:30:49.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4385" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":144,"skipped":2493,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:30:49.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-545a1478-2d15-44ba-8fcd-6983d60ef314 STEP: Creating a pod to test consume secrets May 19 00:30:50.002: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-40339890-b880-4613-b50d-f4e2b2d43906" in namespace "projected-2595" to be "Succeeded or Failed" May 19 00:30:50.144: INFO: Pod "pod-projected-secrets-40339890-b880-4613-b50d-f4e2b2d43906": Phase="Pending", Reason="", readiness=false. 
Elapsed: 142.321282ms May 19 00:30:52.168: INFO: Pod "pod-projected-secrets-40339890-b880-4613-b50d-f4e2b2d43906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1660522s May 19 00:30:54.172: INFO: Pod "pod-projected-secrets-40339890-b880-4613-b50d-f4e2b2d43906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.169879489s STEP: Saw pod success May 19 00:30:54.172: INFO: Pod "pod-projected-secrets-40339890-b880-4613-b50d-f4e2b2d43906" satisfied condition "Succeeded or Failed" May 19 00:30:54.174: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-40339890-b880-4613-b50d-f4e2b2d43906 container projected-secret-volume-test: STEP: delete the pod May 19 00:30:54.206: INFO: Waiting for pod pod-projected-secrets-40339890-b880-4613-b50d-f4e2b2d43906 to disappear May 19 00:30:54.240: INFO: Pod pod-projected-secrets-40339890-b880-4613-b50d-f4e2b2d43906 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:30:54.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2595" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":145,"skipped":2502,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:30:54.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:30:54.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-730d4b31-224d-4110-8e0a-d218b8482319" in namespace "projected-7558" to be "Succeeded or Failed" May 19 00:30:54.390: INFO: Pod "downwardapi-volume-730d4b31-224d-4110-8e0a-d218b8482319": Phase="Pending", Reason="", readiness=false. Elapsed: 3.809645ms May 19 00:30:56.598: INFO: Pod "downwardapi-volume-730d4b31-224d-4110-8e0a-d218b8482319": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212341402s May 19 00:30:58.603: INFO: Pod "downwardapi-volume-730d4b31-224d-4110-8e0a-d218b8482319": Phase="Running", Reason="", readiness=true. Elapsed: 4.216975884s May 19 00:31:00.607: INFO: Pod "downwardapi-volume-730d4b31-224d-4110-8e0a-d218b8482319": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.220592676s STEP: Saw pod success May 19 00:31:00.607: INFO: Pod "downwardapi-volume-730d4b31-224d-4110-8e0a-d218b8482319" satisfied condition "Succeeded or Failed" May 19 00:31:00.609: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-730d4b31-224d-4110-8e0a-d218b8482319 container client-container: STEP: delete the pod May 19 00:31:00.639: INFO: Waiting for pod downwardapi-volume-730d4b31-224d-4110-8e0a-d218b8482319 to disappear May 19 00:31:00.647: INFO: Pod downwardapi-volume-730d4b31-224d-4110-8e0a-d218b8482319 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:31:00.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7558" for this suite. • [SLOW TEST:6.406 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2521,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:31:00.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 19 00:31:00.752: INFO: Waiting up to 5m0s for pod "var-expansion-a17b323a-e1ed-4bc1-910e-5fb24e5ffd75" in namespace "var-expansion-4953" to be "Succeeded or Failed" May 19 00:31:00.765: INFO: Pod "var-expansion-a17b323a-e1ed-4bc1-910e-5fb24e5ffd75": Phase="Pending", Reason="", readiness=false. Elapsed: 13.134115ms May 19 00:31:02.771: INFO: Pod "var-expansion-a17b323a-e1ed-4bc1-910e-5fb24e5ffd75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018813548s May 19 00:31:04.775: INFO: Pod "var-expansion-a17b323a-e1ed-4bc1-910e-5fb24e5ffd75": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022590397s STEP: Saw pod success May 19 00:31:04.775: INFO: Pod "var-expansion-a17b323a-e1ed-4bc1-910e-5fb24e5ffd75" satisfied condition "Succeeded or Failed" May 19 00:31:04.778: INFO: Trying to get logs from node latest-worker pod var-expansion-a17b323a-e1ed-4bc1-910e-5fb24e5ffd75 container dapi-container: STEP: delete the pod May 19 00:31:04.892: INFO: Waiting for pod var-expansion-a17b323a-e1ed-4bc1-910e-5fb24e5ffd75 to disappear May 19 00:31:04.910: INFO: Pod var-expansion-a17b323a-e1ed-4bc1-910e-5fb24e5ffd75 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:31:04.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4953" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:31:04.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-157ffa83-3239-4d88-99f7-4f1cbbda3f26 STEP: Creating a pod to test consume secrets May 19 00:31:05.033: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a5b8c9f1-49b7-4eec-a9c5-5b77ddfb11c1" in namespace "projected-1419" to be "Succeeded or Failed" May 19 00:31:05.036: INFO: Pod "pod-projected-secrets-a5b8c9f1-49b7-4eec-a9c5-5b77ddfb11c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.994004ms May 19 00:31:07.053: INFO: Pod "pod-projected-secrets-a5b8c9f1-49b7-4eec-a9c5-5b77ddfb11c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019913411s May 19 00:31:09.058: INFO: Pod "pod-projected-secrets-a5b8c9f1-49b7-4eec-a9c5-5b77ddfb11c1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024682903s STEP: Saw pod success May 19 00:31:09.058: INFO: Pod "pod-projected-secrets-a5b8c9f1-49b7-4eec-a9c5-5b77ddfb11c1" satisfied condition "Succeeded or Failed" May 19 00:31:09.061: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-a5b8c9f1-49b7-4eec-a9c5-5b77ddfb11c1 container projected-secret-volume-test: STEP: delete the pod May 19 00:31:09.091: INFO: Waiting for pod pod-projected-secrets-a5b8c9f1-49b7-4eec-a9c5-5b77ddfb11c1 to disappear May 19 00:31:09.096: INFO: Pod pod-projected-secrets-a5b8c9f1-49b7-4eec-a9c5-5b77ddfb11c1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:31:09.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1419" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:31:09.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:31:20.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8379" for this suite. • [SLOW TEST:11.236 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":288,"completed":149,"skipped":2609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:31:20.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 19 00:31:20.448: INFO: Waiting up to 5m0s for pod "pod-680770b3-004b-4c23-8aca-acba5f104f0b" in namespace "emptydir-7336" to be "Succeeded or Failed" May 19 00:31:20.452: INFO: Pod "pod-680770b3-004b-4c23-8aca-acba5f104f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.457002ms May 19 00:31:22.456: INFO: Pod "pod-680770b3-004b-4c23-8aca-acba5f104f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007904217s May 19 00:31:24.461: INFO: Pod "pod-680770b3-004b-4c23-8aca-acba5f104f0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012482319s STEP: Saw pod success May 19 00:31:24.461: INFO: Pod "pod-680770b3-004b-4c23-8aca-acba5f104f0b" satisfied condition "Succeeded or Failed" May 19 00:31:24.465: INFO: Trying to get logs from node latest-worker pod pod-680770b3-004b-4c23-8aca-acba5f104f0b container test-container: STEP: delete the pod May 19 00:31:24.521: INFO: Waiting for pod pod-680770b3-004b-4c23-8aca-acba5f104f0b to disappear May 19 00:31:24.524: INFO: Pod pod-680770b3-004b-4c23-8aca-acba5f104f0b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:31:24.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7336" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":150,"skipped":2645,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:31:24.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-d5707955-89ed-4fbd-a7ec-1d4c889316ce STEP: Creating a pod to test consume configMaps May 19 00:31:24.686: INFO: Waiting up to 5m0s for pod "pod-configmaps-58f3b034-2926-4c65-b19c-00f2c03ece11" in namespace "configmap-2576" to be "Succeeded or Failed" May 19 00:31:24.690: INFO: Pod "pod-configmaps-58f3b034-2926-4c65-b19c-00f2c03ece11": Phase="Pending", Reason="", readiness=false. Elapsed: 3.820192ms May 19 00:31:26.766: INFO: Pod "pod-configmaps-58f3b034-2926-4c65-b19c-00f2c03ece11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080511573s May 19 00:31:28.771: INFO: Pod "pod-configmaps-58f3b034-2926-4c65-b19c-00f2c03ece11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084672134s STEP: Saw pod success May 19 00:31:28.771: INFO: Pod "pod-configmaps-58f3b034-2926-4c65-b19c-00f2c03ece11" satisfied condition "Succeeded or Failed" May 19 00:31:28.773: INFO: Trying to get logs from node latest-worker pod pod-configmaps-58f3b034-2926-4c65-b19c-00f2c03ece11 container configmap-volume-test: STEP: delete the pod May 19 00:31:28.805: INFO: Waiting for pod pod-configmaps-58f3b034-2926-4c65-b19c-00f2c03ece11 to disappear May 19 00:31:28.815: INFO: Pod pod-configmaps-58f3b034-2926-4c65-b19c-00f2c03ece11 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:31:28.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2576" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":151,"skipped":2650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:31:28.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0519 00:31:41.794406 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 00:31:41.794: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:31:41.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-363" for this suite. • [SLOW TEST:12.861 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":152,"skipped":2677,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:31:41.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:31:55.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9524" for this suite. • [SLOW TEST:13.547 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":288,"completed":153,"skipped":2681,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:31:55.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1309 STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 00:31:55.479: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 19 00:31:55.653: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 00:31:57.706: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 00:31:59.658: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 00:32:01.657: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:32:03.657: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:32:05.657: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:32:07.657: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:32:09.658: INFO: The status of Pod netserver-0 is Running (Ready = true) May 19 00:32:09.664: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 00:32:11.668: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 19 00:32:17.701: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.171 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1309 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:32:17.701: INFO: >>> kubeConfig: /root/.kube/config I0519 00:32:17.730375 7 log.go:172] (0xc002c51340) (0xc002b074a0) Create stream I0519 00:32:17.730401 7 log.go:172] (0xc002c51340) (0xc002b074a0) Stream added, broadcasting: 1 I0519 00:32:17.732288 7 log.go:172] (0xc002c51340) Reply frame received for 1 I0519 00:32:17.732345 7 log.go:172] (0xc002c51340) (0xc002ba0500) Create stream I0519 00:32:17.732370 7 log.go:172] (0xc002c51340) (0xc002ba0500) Stream added, broadcasting: 3 I0519 00:32:17.733659 7 log.go:172] (0xc002c51340) Reply frame received for 3 I0519 00:32:17.733691 7 log.go:172] (0xc002c51340) (0xc001b95220) Create stream I0519 00:32:17.733709 7 log.go:172] (0xc002c51340) (0xc001b95220) Stream added, broadcasting: 5 I0519 00:32:17.734449 7 log.go:172] (0xc002c51340) Reply frame received for 5 I0519 00:32:18.810991 7 log.go:172] (0xc002c51340) Data frame received for 3 I0519 00:32:18.811034 7 log.go:172] (0xc002ba0500) (3) Data frame handling I0519 00:32:18.811066 7 log.go:172] (0xc002ba0500) (3) Data frame sent I0519 00:32:18.811089 7 log.go:172] (0xc002c51340) Data frame received for 3 I0519 00:32:18.811100 7 
log.go:172] (0xc002ba0500) (3) Data frame handling I0519 00:32:18.811234 7 log.go:172] (0xc002c51340) Data frame received for 5 I0519 00:32:18.811285 7 log.go:172] (0xc001b95220) (5) Data frame handling I0519 00:32:18.814072 7 log.go:172] (0xc002c51340) Data frame received for 1 I0519 00:32:18.814116 7 log.go:172] (0xc002b074a0) (1) Data frame handling I0519 00:32:18.814156 7 log.go:172] (0xc002b074a0) (1) Data frame sent I0519 00:32:18.814183 7 log.go:172] (0xc002c51340) (0xc002b074a0) Stream removed, broadcasting: 1 I0519 00:32:18.814204 7 log.go:172] (0xc002c51340) Go away received I0519 00:32:18.814339 7 log.go:172] (0xc002c51340) (0xc002b074a0) Stream removed, broadcasting: 1 I0519 00:32:18.814367 7 log.go:172] (0xc002c51340) (0xc002ba0500) Stream removed, broadcasting: 3 I0519 00:32:18.814379 7 log.go:172] (0xc002c51340) (0xc001b95220) Stream removed, broadcasting: 5 May 19 00:32:18.814: INFO: Found all expected endpoints: [netserver-0] May 19 00:32:18.818: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.163 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1309 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:32:18.818: INFO: >>> kubeConfig: /root/.kube/config I0519 00:32:18.847407 7 log.go:172] (0xc0023564d0) (0xc001b95720) Create stream I0519 00:32:18.847436 7 log.go:172] (0xc0023564d0) (0xc001b95720) Stream added, broadcasting: 1 I0519 00:32:18.849973 7 log.go:172] (0xc0023564d0) Reply frame received for 1 I0519 00:32:18.850023 7 log.go:172] (0xc0023564d0) (0xc002ba05a0) Create stream I0519 00:32:18.850040 7 log.go:172] (0xc0023564d0) (0xc002ba05a0) Stream added, broadcasting: 3 I0519 00:32:18.851112 7 log.go:172] (0xc0023564d0) Reply frame received for 3 I0519 00:32:18.851157 7 log.go:172] (0xc0023564d0) (0xc002ba0640) Create stream I0519 00:32:18.851178 7 log.go:172] (0xc0023564d0) (0xc002ba0640) Stream added, broadcasting: 5 I0519 00:32:18.851977 7 log.go:172] (0xc0023564d0) Reply frame received for 5 I0519 00:32:19.956382 7 log.go:172] (0xc0023564d0) Data frame received for 3 I0519 00:32:19.956422 7 log.go:172] (0xc002ba05a0) (3) Data frame handling I0519 00:32:19.956434 7 log.go:172] (0xc002ba05a0) (3) Data frame sent I0519 00:32:19.956442 7 log.go:172] (0xc0023564d0) Data frame received for 3 I0519 00:32:19.956464 7 log.go:172] (0xc0023564d0) Data frame received for 5 I0519 00:32:19.956503 7 log.go:172] (0xc002ba0640) (5) Data frame handling I0519 00:32:19.956532 7 log.go:172] (0xc002ba05a0) (3) Data frame handling I0519 00:32:19.958483 7 log.go:172] (0xc0023564d0) Data frame received for 1 I0519 00:32:19.958509 7 log.go:172] (0xc001b95720) (1) Data frame handling I0519 00:32:19.958520 7 log.go:172] (0xc001b95720) (1) Data frame sent I0519 00:32:19.958537 7 log.go:172] (0xc0023564d0) (0xc001b95720) Stream removed, broadcasting: 1 I0519 00:32:19.958564 7 log.go:172] (0xc0023564d0) Go away received I0519 00:32:19.958749 7 log.go:172] (0xc0023564d0) (0xc001b95720) Stream removed, broadcasting: 1 I0519 00:32:19.958785 7 log.go:172] (0xc0023564d0) (0xc002ba05a0) Stream removed, broadcasting: 3 I0519 00:32:19.958861 7 log.go:172] (0xc0023564d0) (0xc002ba0640) Stream removed, broadcasting: 5 May 19 00:32:19.958: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:32:19.959: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "pod-network-test-1309" for this suite. • [SLOW TEST:24.646 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":154,"skipped":2684,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:32:19.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 19 00:32:20.053: INFO: Waiting up to 5m0s for pod "var-expansion-9fd8b6e7-02a0-48f6-8083-a95e70f7de47" in namespace "var-expansion-1658" to be "Succeeded or Failed" May 19 00:32:20.056: INFO: Pod "var-expansion-9fd8b6e7-02a0-48f6-8083-a95e70f7de47": Phase="Pending", Reason="", readiness=false. Elapsed: 3.600158ms May 19 00:32:22.090: INFO: Pod "var-expansion-9fd8b6e7-02a0-48f6-8083-a95e70f7de47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037271615s May 19 00:32:24.094: INFO: Pod "var-expansion-9fd8b6e7-02a0-48f6-8083-a95e70f7de47": Phase="Running", Reason="", readiness=true. Elapsed: 4.041863221s May 19 00:32:26.107: INFO: Pod "var-expansion-9fd8b6e7-02a0-48f6-8083-a95e70f7de47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054820904s STEP: Saw pod success May 19 00:32:26.107: INFO: Pod "var-expansion-9fd8b6e7-02a0-48f6-8083-a95e70f7de47" satisfied condition "Succeeded or Failed" May 19 00:32:26.110: INFO: Trying to get logs from node latest-worker2 pod var-expansion-9fd8b6e7-02a0-48f6-8083-a95e70f7de47 container dapi-container: STEP: delete the pod May 19 00:32:26.629: INFO: Waiting for pod var-expansion-9fd8b6e7-02a0-48f6-8083-a95e70f7de47 to disappear May 19 00:32:26.670: INFO: Pod var-expansion-9fd8b6e7-02a0-48f6-8083-a95e70f7de47 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:32:26.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1658" for this suite. 
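------------------------------
The composition tested above is plain $(NAME) expansion inside env values: a later entry may reference earlier ones, and Kubernetes substitutes the values before the container starts, so ordering in the list matters. A small illustration; the variable names mirror the test's intent but are not taken from its source:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Only variables defined earlier in the list are resolvable;
	// an unknown $(NAME) is left as literal text.
	env := []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		{Name: "BAR", Value: "bar-value"},
		{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"}, // becomes "foo-value;;bar-value"
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}
------------------------------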
• [SLOW TEST:6.779 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":155,"skipped":2701,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:32:26.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 19 00:32:33.214: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5227 PodName:pod-sharedvolume-e53871cc-92fe-4ba2-a2f8-23d71ab627d5 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:32:33.214: INFO: >>> kubeConfig: /root/.kube/config I0519 00:32:33.239602 7 log.go:172] (0xc001fc66e0) (0xc002d40960) Create stream I0519 00:32:33.239650 7 log.go:172] (0xc001fc66e0) (0xc002d40960) Stream added, broadcasting: 1 I0519 00:32:33.241726 7 log.go:172] (0xc001fc66e0) Reply frame received for 1 I0519 00:32:33.241771 7 log.go:172] (0xc001fc66e0) (0xc002ba1540) Create stream I0519 00:32:33.241787 7 log.go:172] (0xc001fc66e0) (0xc002ba1540) Stream added, broadcasting: 3 I0519 00:32:33.242789 7 log.go:172] (0xc001fc66e0) Reply frame received for 3 I0519 00:32:33.242854 7 log.go:172] (0xc001fc66e0) (0xc002ba15e0) Create stream I0519 00:32:33.242870 7 log.go:172] (0xc001fc66e0) (0xc002ba15e0) Stream added, broadcasting: 5 I0519 00:32:33.243772 7 log.go:172] (0xc001fc66e0) Reply frame received for 5 I0519 00:32:33.326829 7 log.go:172] (0xc001fc66e0) Data frame received for 3 I0519 00:32:33.326865 7 log.go:172] (0xc002ba1540) (3) Data frame handling I0519 00:32:33.326881 7 log.go:172] (0xc002ba1540) (3) Data frame sent I0519 00:32:33.326898 7 log.go:172] (0xc001fc66e0) Data frame received for 3 I0519 00:32:33.326909 7 log.go:172] (0xc002ba1540) (3) Data frame handling I0519 00:32:33.326934 7 log.go:172] (0xc001fc66e0) Data frame received for 5 I0519 00:32:33.326945 7 log.go:172] (0xc002ba15e0) (5) Data frame handling I0519 00:32:33.328410 7 log.go:172] (0xc001fc66e0) Data frame received for 1 I0519 00:32:33.328432 7 log.go:172] (0xc002d40960) (1) Data frame handling I0519 00:32:33.328443 7 log.go:172] (0xc002d40960) (1) Data frame sent I0519 00:32:33.328455 7 log.go:172] (0xc001fc66e0) (0xc002d40960) Stream removed, broadcasting: 1 I0519 00:32:33.328512 7 log.go:172] (0xc001fc66e0) Go away received I0519 00:32:33.328555 7 log.go:172] (0xc001fc66e0) 
(0xc002d40960) Stream removed, broadcasting: 1 I0519 00:32:33.328574 7 log.go:172] (0xc001fc66e0) (0xc002ba1540) Stream removed, broadcasting: 3 I0519 00:32:33.328587 7 log.go:172] (0xc001fc66e0) (0xc002ba15e0) Stream removed, broadcasting: 5 May 19 00:32:33.328: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:32:33.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5227" for this suite. • [SLOW TEST:6.562 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":156,"skipped":2717,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:32:33.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 19 00:32:33.441: INFO: Waiting up to 5m0s for pod "var-expansion-cbe8b83c-fb69-437f-b445-cdb710063721" in namespace "var-expansion-138" to be "Succeeded or Failed" May 19 00:32:33.447: INFO: Pod "var-expansion-cbe8b83c-fb69-437f-b445-cdb710063721": Phase="Pending", Reason="", readiness=false. Elapsed: 5.743726ms May 19 00:32:35.527: INFO: Pod "var-expansion-cbe8b83c-fb69-437f-b445-cdb710063721": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085647334s May 19 00:32:37.539: INFO: Pod "var-expansion-cbe8b83c-fb69-437f-b445-cdb710063721": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09750077s STEP: Saw pod success May 19 00:32:37.539: INFO: Pod "var-expansion-cbe8b83c-fb69-437f-b445-cdb710063721" satisfied condition "Succeeded or Failed" May 19 00:32:37.542: INFO: Trying to get logs from node latest-worker pod var-expansion-cbe8b83c-fb69-437f-b445-cdb710063721 container dapi-container: STEP: delete the pod May 19 00:32:37.574: INFO: Waiting for pod var-expansion-cbe8b83c-fb69-437f-b445-cdb710063721 to disappear May 19 00:32:37.584: INFO: Pod var-expansion-cbe8b83c-fb69-437f-b445-cdb710063721 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:32:37.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-138" for this suite. 
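------------------------------
In the args-substitution case just finished, the $(NAME) expansion happens in the container's command/args rather than in env. Kubernetes performs the substitution from the container's own env before the process is started, so the shell never sees the reference. An illustrative container spec (image and variable name are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox", // illustrative
		Env:   []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
		// $(TEST_VAR) is replaced by Kubernetes with "test-value"
		// before the command runs; the container just echoes it.
		Command: []string{"sh", "-c", "echo $(TEST_VAR)"},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
------------------------------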
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":157,"skipped":2717,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:32:37.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 19 00:32:42.274: INFO: Successfully updated pod "annotationupdate485eeb63-4e64-406d-aebf-8eedf2667ef8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:32:44.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2703" for this suite. • [SLOW TEST:6.720 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2724,"failed":0} [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:32:44.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:32:44.458: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba6b9cb0-901d-47fd-b2de-1ee34f73d572" in namespace "downward-api-1904" to be "Succeeded or Failed" May 19 00:32:44.460: INFO: Pod "downwardapi-volume-ba6b9cb0-901d-47fd-b2de-1ee34f73d572": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.119697ms May 19 00:32:46.464: INFO: Pod "downwardapi-volume-ba6b9cb0-901d-47fd-b2de-1ee34f73d572": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005718981s May 19 00:32:48.491: INFO: Pod "downwardapi-volume-ba6b9cb0-901d-47fd-b2de-1ee34f73d572": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032988679s STEP: Saw pod success May 19 00:32:48.491: INFO: Pod "downwardapi-volume-ba6b9cb0-901d-47fd-b2de-1ee34f73d572" satisfied condition "Succeeded or Failed" May 19 00:32:48.495: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ba6b9cb0-901d-47fd-b2de-1ee34f73d572 container client-container: STEP: delete the pod May 19 00:32:48.543: INFO: Waiting for pod downwardapi-volume-ba6b9cb0-901d-47fd-b2de-1ee34f73d572 to disappear May 19 00:32:48.567: INFO: Pod downwardapi-volume-ba6b9cb0-901d-47fd-b2de-1ee34f73d572 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:32:48.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1904" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":159,"skipped":2724,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:32:48.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:32:48.876: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:32:55.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9980" for this suite. 
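------------------------------
The CRD listing case above boils down to a LIST call against apiextensions.k8s.io/v1 (the conformance test first creates a few test CRDs, which this sketch omits). A minimal sketch using the apiextensions clientset; the kubeconfig path is taken from the log above and error handling is kept minimal:

package main

import (
	"context"
	"fmt"
	"log"

	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := clientset.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Plain LIST of every CustomResourceDefinition in the cluster.
	list, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, crd := range list.Items {
		fmt.Println(crd.Name)
	}
}
------------------------------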
• [SLOW TEST:6.627 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":160,"skipped":2725,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:32:55.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-8a8e8bb7-6875-4153-aeed-14d28a334e25 STEP: Creating a pod to test consume configMaps May 19 00:32:55.283: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aa00a2d1-b645-4832-affe-652d1094afcc" in namespace "projected-7227" to be "Succeeded or Failed" May 19 00:32:55.353: INFO: Pod "pod-projected-configmaps-aa00a2d1-b645-4832-affe-652d1094afcc": Phase="Pending", Reason="", readiness=false. Elapsed: 69.896861ms May 19 00:32:57.357: INFO: Pod "pod-projected-configmaps-aa00a2d1-b645-4832-affe-652d1094afcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073520693s May 19 00:32:59.361: INFO: Pod "pod-projected-configmaps-aa00a2d1-b645-4832-affe-652d1094afcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077859896s STEP: Saw pod success May 19 00:32:59.361: INFO: Pod "pod-projected-configmaps-aa00a2d1-b645-4832-affe-652d1094afcc" satisfied condition "Succeeded or Failed" May 19 00:32:59.365: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-aa00a2d1-b645-4832-affe-652d1094afcc container projected-configmap-volume-test: STEP: delete the pod May 19 00:32:59.402: INFO: Waiting for pod pod-projected-configmaps-aa00a2d1-b645-4832-affe-652d1094afcc to disappear May 19 00:32:59.412: INFO: Pod pod-projected-configmaps-aa00a2d1-b645-4832-affe-652d1094afcc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:32:59.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7227" for this suite. 
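------------------------------
Projected volumes, exercised above, differ from plain configMap volumes in that several sources (configMap, secret, downwardAPI, serviceAccountToken) can be merged under a single mount. A sketch of the volume definition such a test uses; names and paths are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm-demo"},
						// Remap the key to a custom relative path, as in the
						// "with mappings" variant of the test.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "projected-configmap/data-1"}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------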
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2726,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:32:59.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:32:59.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-534c7791-e49a-47e5-a3a2-a3f31ad55b5e" in namespace "projected-3256" to be "Succeeded or Failed" May 19 00:32:59.502: INFO: Pod "downwardapi-volume-534c7791-e49a-47e5-a3a2-a3f31ad55b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.590722ms May 19 00:33:01.506: INFO: Pod "downwardapi-volume-534c7791-e49a-47e5-a3a2-a3f31ad55b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007833486s May 19 00:33:03.511: INFO: Pod "downwardapi-volume-534c7791-e49a-47e5-a3a2-a3f31ad55b5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012339579s STEP: Saw pod success May 19 00:33:03.511: INFO: Pod "downwardapi-volume-534c7791-e49a-47e5-a3a2-a3f31ad55b5e" satisfied condition "Succeeded or Failed" May 19 00:33:03.513: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-534c7791-e49a-47e5-a3a2-a3f31ad55b5e container client-container: STEP: delete the pod May 19 00:33:03.552: INFO: Waiting for pod downwardapi-volume-534c7791-e49a-47e5-a3a2-a3f31ad55b5e to disappear May 19 00:33:03.561: INFO: Pod downwardapi-volume-534c7791-e49a-47e5-a3a2-a3f31ad55b5e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:03.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3256" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":162,"skipped":2744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:03.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:07.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4006" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:07.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 19 00:33:07.828: INFO: Waiting up to 5m0s for pod "pod-30e6fd78-300a-41a2-964b-35c3918cd9c9" in namespace "emptydir-2911" to be "Succeeded or Failed" May 19 00:33:07.831: INFO: Pod "pod-30e6fd78-300a-41a2-964b-35c3918cd9c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.539321ms May 19 00:33:09.835: INFO: Pod "pod-30e6fd78-300a-41a2-964b-35c3918cd9c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006891049s May 19 00:33:11.839: INFO: Pod "pod-30e6fd78-300a-41a2-964b-35c3918cd9c9": Phase="Running", Reason="", readiness=true. Elapsed: 4.011211029s May 19 00:33:13.841: INFO: Pod "pod-30e6fd78-300a-41a2-964b-35c3918cd9c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013732442s STEP: Saw pod success May 19 00:33:13.842: INFO: Pod "pod-30e6fd78-300a-41a2-964b-35c3918cd9c9" satisfied condition "Succeeded or Failed" May 19 00:33:13.843: INFO: Trying to get logs from node latest-worker pod pod-30e6fd78-300a-41a2-964b-35c3918cd9c9 container test-container: STEP: delete the pod May 19 00:33:13.872: INFO: Waiting for pod pod-30e6fd78-300a-41a2-964b-35c3918cd9c9 to disappear May 19 00:33:13.916: INFO: Pod pod-30e6fd78-300a-41a2-964b-35c3918cd9c9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:13.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2911" for this suite. • [SLOW TEST:6.221 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:13.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:14.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9733" for this suite. 
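------------------------------
The QoS rule the case above verifies: when every container sets both requests and limits and they are equal for each resource, the pod is classed Guaranteed, and the cluster records the computed class in status.qosClass. A sketch of such a container; the image is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Identical requests and limits for cpu and memory => Guaranteed.
	// Requests set lower than limits would yield Burstable; neither set,
	// BestEffort.
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	c := corev1.Container{
		Name:      "qos-demo",
		Image:     "busybox", // illustrative
		Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
------------------------------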
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":165,"skipped":2857,"failed":0} SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:14.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-8383 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8383 to expose endpoints map[] May 19 00:33:14.388: INFO: successfully validated that service endpoint-test2 in namespace services-8383 exposes endpoints map[] (34.659484ms elapsed) STEP: Creating pod pod1 in namespace services-8383 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8383 to expose endpoints map[pod1:[80]] May 19 00:33:18.704: INFO: successfully validated that service endpoint-test2 in namespace services-8383 exposes endpoints map[pod1:[80]] (4.293523787s elapsed) STEP: Creating pod pod2 in namespace services-8383 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8383 to expose endpoints map[pod1:[80] pod2:[80]] May 19 00:33:22.802: INFO: successfully validated that service endpoint-test2 in namespace services-8383 exposes endpoints map[pod1:[80] pod2:[80]] (4.094574188s elapsed) STEP: Deleting pod pod1 in namespace services-8383 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8383 to expose endpoints map[pod2:[80]] May 19 00:33:23.897: INFO: successfully validated that service endpoint-test2 in namespace services-8383 exposes endpoints map[pod2:[80]] (1.089840209s elapsed) STEP: Deleting pod pod2 in namespace services-8383 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8383 to expose endpoints map[] May 19 00:33:24.929: INFO: successfully validated that service endpoint-test2 in namespace services-8383 exposes endpoints map[] (1.027409724s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:24.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8383" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.794 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":166,"skipped":2859,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:25.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 19 00:33:25.086: INFO: Waiting up to 5m0s for pod "downward-api-7802138b-af14-4c9f-9d82-3e9624437dd4" in namespace "downward-api-5091" to be "Succeeded or Failed" May 19 00:33:25.102: INFO: Pod "downward-api-7802138b-af14-4c9f-9d82-3e9624437dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.657801ms May 19 00:33:27.168: INFO: Pod "downward-api-7802138b-af14-4c9f-9d82-3e9624437dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081916224s May 19 00:33:29.172: INFO: Pod "downward-api-7802138b-af14-4c9f-9d82-3e9624437dd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086057647s STEP: Saw pod success May 19 00:33:29.172: INFO: Pod "downward-api-7802138b-af14-4c9f-9d82-3e9624437dd4" satisfied condition "Succeeded or Failed" May 19 00:33:29.176: INFO: Trying to get logs from node latest-worker2 pod downward-api-7802138b-af14-4c9f-9d82-3e9624437dd4 container dapi-container: STEP: delete the pod May 19 00:33:29.253: INFO: Waiting for pod downward-api-7802138b-af14-4c9f-9d82-3e9624437dd4 to disappear May 19 00:33:29.262: INFO: Pod downward-api-7802138b-af14-4c9f-9d82-3e9624437dd4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:29.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5091" for this suite. 
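------------------------------
In the downward API case above, the pod UID reaches the container through an env var with fieldRef metadata.uid, resolved when the pod starts. A minimal sketch; the env var name is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{{
		Name: "POD_UID",
		ValueFrom: &corev1.EnvVarSource{
			// metadata.uid is one of the pod fields the downward API
			// exposes, alongside metadata.name and metadata.namespace.
			FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.uid"},
		},
	}}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}
------------------------------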
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":167,"skipped":2866,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:29.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-b8814e7e-2460-4e42-9a12-a34585276541 STEP: Creating a pod to test consume secrets May 19 00:33:29.423: INFO: Waiting up to 5m0s for pod "pod-secrets-853b0485-ee56-4398-a5a3-d5be4632ed60" in namespace "secrets-444" to be "Succeeded or Failed" May 19 00:33:29.439: INFO: Pod "pod-secrets-853b0485-ee56-4398-a5a3-d5be4632ed60": Phase="Pending", Reason="", readiness=false. Elapsed: 15.564783ms May 19 00:33:31.443: INFO: Pod "pod-secrets-853b0485-ee56-4398-a5a3-d5be4632ed60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020009256s May 19 00:33:33.448: INFO: Pod "pod-secrets-853b0485-ee56-4398-a5a3-d5be4632ed60": Phase="Running", Reason="", readiness=true. Elapsed: 4.024451023s May 19 00:33:35.452: INFO: Pod "pod-secrets-853b0485-ee56-4398-a5a3-d5be4632ed60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029040518s STEP: Saw pod success May 19 00:33:35.452: INFO: Pod "pod-secrets-853b0485-ee56-4398-a5a3-d5be4632ed60" satisfied condition "Succeeded or Failed" May 19 00:33:35.456: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-853b0485-ee56-4398-a5a3-d5be4632ed60 container secret-volume-test: STEP: delete the pod May 19 00:33:35.503: INFO: Waiting for pod pod-secrets-853b0485-ee56-4398-a5a3-d5be4632ed60 to disappear May 19 00:33:35.509: INFO: Pod pod-secrets-853b0485-ee56-4398-a5a3-d5be4632ed60 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:35.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-444" for this suite. 
• [SLOW TEST:6.247 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":168,"skipped":2869,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:35.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:33:35.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8680ba49-2c62-4ba3-bb6e-88eed09892ec" in namespace "downward-api-9375" to be "Succeeded or Failed" May 19 00:33:35.618: INFO: Pod "downwardapi-volume-8680ba49-2c62-4ba3-bb6e-88eed09892ec": Phase="Pending", Reason="", readiness=false. Elapsed: 15.977263ms May 19 00:33:37.623: INFO: Pod "downwardapi-volume-8680ba49-2c62-4ba3-bb6e-88eed09892ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020936036s May 19 00:33:39.627: INFO: Pod "downwardapi-volume-8680ba49-2c62-4ba3-bb6e-88eed09892ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025556469s STEP: Saw pod success May 19 00:33:39.627: INFO: Pod "downwardapi-volume-8680ba49-2c62-4ba3-bb6e-88eed09892ec" satisfied condition "Succeeded or Failed" May 19 00:33:39.635: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8680ba49-2c62-4ba3-bb6e-88eed09892ec container client-container: STEP: delete the pod May 19 00:33:39.670: INFO: Waiting for pod downwardapi-volume-8680ba49-2c62-4ba3-bb6e-88eed09892ec to disappear May 19 00:33:39.682: INFO: Pod downwardapi-volume-8680ba49-2c62-4ba3-bb6e-88eed09892ec no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:39.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9375" for this suite. 
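------------------------------
resourceFieldRef, used in the case above, exposes a container's own resource accounting to itself as a file; with the divisor left at its default of 1, the memory request lands in the file in bytes. A sketch; the volume and file names are illustrative, and containerName must match a container in the same pod that actually sets requests.memory:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.memory",
						// Divisor defaults to 1, so the value is in bytes;
						// set it to e.g. 1Mi to get mebibytes.
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------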
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":169,"skipped":2884,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:39.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 19 00:33:39.786: INFO: Waiting up to 5m0s for pod "pod-c017da9c-b127-4dd5-8c13-0c4320942e00" in namespace "emptydir-584" to be "Succeeded or Failed" May 19 00:33:39.802: INFO: Pod "pod-c017da9c-b127-4dd5-8c13-0c4320942e00": Phase="Pending", Reason="", readiness=false. Elapsed: 16.148155ms May 19 00:33:41.806: INFO: Pod "pod-c017da9c-b127-4dd5-8c13-0c4320942e00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02000372s May 19 00:33:43.811: INFO: Pod "pod-c017da9c-b127-4dd5-8c13-0c4320942e00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025086446s STEP: Saw pod success May 19 00:33:43.811: INFO: Pod "pod-c017da9c-b127-4dd5-8c13-0c4320942e00" satisfied condition "Succeeded or Failed" May 19 00:33:43.814: INFO: Trying to get logs from node latest-worker pod pod-c017da9c-b127-4dd5-8c13-0c4320942e00 container test-container: STEP: delete the pod May 19 00:33:43.838: INFO: Waiting for pod pod-c017da9c-b127-4dd5-8c13-0c4320942e00 to disappear May 19 00:33:43.849: INFO: Pod pod-c017da9c-b127-4dd5-8c13-0c4320942e00 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:43.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-584" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":170,"skipped":2887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:43.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 19 00:33:43.955: INFO: Waiting up to 5m0s for pod "client-containers-71dccb04-2647-43e1-84b9-a9c9a1ae31f8" in namespace "containers-8479" to be "Succeeded or Failed" May 19 00:33:43.959: INFO: Pod "client-containers-71dccb04-2647-43e1-84b9-a9c9a1ae31f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.361132ms May 19 00:33:45.963: INFO: Pod "client-containers-71dccb04-2647-43e1-84b9-a9c9a1ae31f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007186694s May 19 00:33:47.967: INFO: Pod "client-containers-71dccb04-2647-43e1-84b9-a9c9a1ae31f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011161141s STEP: Saw pod success May 19 00:33:47.967: INFO: Pod "client-containers-71dccb04-2647-43e1-84b9-a9c9a1ae31f8" satisfied condition "Succeeded or Failed" May 19 00:33:47.969: INFO: Trying to get logs from node latest-worker2 pod client-containers-71dccb04-2647-43e1-84b9-a9c9a1ae31f8 container test-container: STEP: delete the pod May 19 00:33:48.014: INFO: Waiting for pod client-containers-71dccb04-2647-43e1-84b9-a9c9a1ae31f8 to disappear May 19 00:33:48.019: INFO: Pod client-containers-71dccb04-2647-43e1-84b9-a9c9a1ae31f8 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:48.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8479" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":2947,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:48.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:33:48.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3bd5786b-ceab-4ce0-9a04-2ceedc3fced0" in namespace "downward-api-4075" to be "Succeeded or Failed" May 19 00:33:48.163: INFO: Pod "downwardapi-volume-3bd5786b-ceab-4ce0-9a04-2ceedc3fced0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.230553ms May 19 00:33:50.168: INFO: Pod "downwardapi-volume-3bd5786b-ceab-4ce0-9a04-2ceedc3fced0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023360862s May 19 00:33:52.174: INFO: Pod "downwardapi-volume-3bd5786b-ceab-4ce0-9a04-2ceedc3fced0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028576248s STEP: Saw pod success May 19 00:33:52.174: INFO: Pod "downwardapi-volume-3bd5786b-ceab-4ce0-9a04-2ceedc3fced0" satisfied condition "Succeeded or Failed" May 19 00:33:52.177: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3bd5786b-ceab-4ce0-9a04-2ceedc3fced0 container client-container: STEP: delete the pod May 19 00:33:52.193: INFO: Waiting for pod downwardapi-volume-3bd5786b-ceab-4ce0-9a04-2ceedc3fced0 to disappear May 19 00:33:52.234: INFO: Pod downwardapi-volume-3bd5786b-ceab-4ce0-9a04-2ceedc3fced0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:33:52.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4075" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":2978,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:33:52.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:33:52.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:33:55.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445232, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445232, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445232, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445232, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:33:57.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445232, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445232, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445232, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445232, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:34:00.097: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration 
objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:34:00.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4303" for this suite. STEP: Destroying namespace "webhook-4303-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.211 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":173,"skipped":2986,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:34:00.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0e33603c-e331-41dc-814a-f300a9f4105b STEP: Creating a pod to test consume secrets May 19 00:34:00.579: INFO: Waiting up to 5m0s for pod "pod-secrets-17f03e0f-cc41-40ba-a460-2c46a9e8c276" in namespace "secrets-2006" to be "Succeeded or Failed" May 19 00:34:00.583: INFO: Pod "pod-secrets-17f03e0f-cc41-40ba-a460-2c46a9e8c276": Phase="Pending", Reason="", readiness=false. Elapsed: 3.401494ms May 19 00:34:02.619: INFO: Pod "pod-secrets-17f03e0f-cc41-40ba-a460-2c46a9e8c276": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039920247s May 19 00:34:04.623: INFO: Pod "pod-secrets-17f03e0f-cc41-40ba-a460-2c46a9e8c276": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044330053s STEP: Saw pod success May 19 00:34:04.624: INFO: Pod "pod-secrets-17f03e0f-cc41-40ba-a460-2c46a9e8c276" satisfied condition "Succeeded or Failed" May 19 00:34:04.627: INFO: Trying to get logs from node latest-worker pod pod-secrets-17f03e0f-cc41-40ba-a460-2c46a9e8c276 container secret-volume-test: STEP: delete the pod May 19 00:34:04.659: INFO: Waiting for pod pod-secrets-17f03e0f-cc41-40ba-a460-2c46a9e8c276 to disappear May 19 00:34:04.672: INFO: Pod pod-secrets-17f03e0f-cc41-40ba-a460-2c46a9e8c276 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:34:04.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2006" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":2995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:34:04.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-60f7fe41-a622-4f9c-abde-f574799b5294 STEP: Creating a pod to test consume secrets May 19 00:34:04.786: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ae8c575-554c-4399-98d9-08fdd414f991" in namespace "projected-5659" to be "Succeeded or Failed" May 19 00:34:04.800: INFO: Pod "pod-projected-secrets-7ae8c575-554c-4399-98d9-08fdd414f991": Phase="Pending", Reason="", readiness=false. Elapsed: 13.925067ms May 19 00:34:06.803: INFO: Pod "pod-projected-secrets-7ae8c575-554c-4399-98d9-08fdd414f991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017151515s May 19 00:34:08.807: INFO: Pod "pod-projected-secrets-7ae8c575-554c-4399-98d9-08fdd414f991": Phase="Running", Reason="", readiness=true. Elapsed: 4.021894596s May 19 00:34:10.811: INFO: Pod "pod-projected-secrets-7ae8c575-554c-4399-98d9-08fdd414f991": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025687074s STEP: Saw pod success May 19 00:34:10.811: INFO: Pod "pod-projected-secrets-7ae8c575-554c-4399-98d9-08fdd414f991" satisfied condition "Succeeded or Failed" May 19 00:34:10.814: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7ae8c575-554c-4399-98d9-08fdd414f991 container secret-volume-test: STEP: delete the pod May 19 00:34:10.865: INFO: Waiting for pod pod-projected-secrets-7ae8c575-554c-4399-98d9-08fdd414f991 to disappear May 19 00:34:10.905: INFO: Pod pod-projected-secrets-7ae8c575-554c-4399-98d9-08fdd414f991 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:34:10.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5659" for this suite. • [SLOW TEST:6.259 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":175,"skipped":3025,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:34:10.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 19 00:34:11.110: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
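The registration step above amounts to creating an APIService object that tells the aggregation layer to route one group/version to an in-cluster service. A sketch under assumptions: the group, service names, and priorities are illustrative, and the real test provisions certificates and pins a CA bundle rather than skipping TLS verification:

```go
// Hypothetical sketch of registering an aggregated API server.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	port := int32(443)
	apiService := &apiregistrationv1.APIService{
		// Name is conventionally "<version>.<group>".
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com", // illustrative group
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-3943",
				Name:      "sample-api", // illustrative service name
				Port:      &port,
			},
			InsecureSkipTLSVerify: true, // the real test sets CABundle instead
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	out, _ := json.MarshalIndent(apiService, "", "  ")
	fmt.Println(string(out))
}
```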
May 19 00:34:11.521: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 19 00:34:13.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445251, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445251, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445251, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445251, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:34:15.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445251, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445251, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445251, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445251, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:34:18.432: INFO: Waited 623.46743ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:34:18.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3943" for this suite. 
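The repeated "deployment status" lines above come from a readiness poll: the framework re-reads the Deployment until its Available condition turns True. A minimal client-go equivalent of that wait loop (poll interval, timeout, and object names are illustrative assumptions):

```go
// Hypothetical sketch: poll a Deployment until Available=True.
package main

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Re-read the Deployment until its Available condition is True.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		d, err := client.AppsV1().Deployments("aggregator-3943").Get(
			context.TODO(), "sample-apiserver-deployment", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range d.Status.Conditions {
			if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		fmt.Printf("deployment not yet available: %d/%d replicas ready\n",
			d.Status.ReadyReplicas, d.Status.Replicas)
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}
```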
• [SLOW TEST:8.246 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":176,"skipped":3036,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:34:19.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:34:19.842: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Pending, waiting for it to be Running (with Ready = true) May 19 00:34:21.846: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Pending, waiting for it to be Running (with Ready = true) May 19 00:34:23.846: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:25.846: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:27.846: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:29.847: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:31.847: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:33.847: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:35.847: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:37.847: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:39.847: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:41.847: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = false) May 19 00:34:43.846: INFO: The status of Pod test-webserver-b8a633b5-8884-4065-8468-9d6a06c801cb is Running (Ready = true) May 19 00:34:43.849: INFO: Container started at 2020-05-19 00:34:22 +0000 UTC, pod became ready at 2020-05-19 00:34:42 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:34:43.849: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9893" for this suite. • [SLOW TEST:24.671 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":177,"skipped":3042,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:34:43.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-949 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule the stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-949 STEP: Creating statefulset with conflicting port in namespace statefulset-949 STEP: Waiting until pod test-pod starts running in namespace statefulset-949 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-949 May 19 00:34:50.083: INFO: Observed stateful pod in namespace: statefulset-949, name: ss-0, uid: 3e14f300-9d47-471c-8920-30feec354583, status phase: Pending. Waiting for statefulset controller to delete. May 19 00:34:50.388: INFO: Observed stateful pod in namespace: statefulset-949, name: ss-0, uid: 3e14f300-9d47-471c-8920-30feec354583, status phase: Failed. Waiting for statefulset controller to delete. May 19 00:34:50.409: INFO: Observed stateful pod in namespace: statefulset-949, name: ss-0, uid: 3e14f300-9d47-471c-8920-30feec354583, status phase: Failed. Waiting for statefulset controller to delete. 
May 19 00:34:50.425: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-949 STEP: Removing pod with conflicting port in namespace statefulset-949 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-949 and is in the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 19 00:34:54.581: INFO: Deleting all statefulsets in ns statefulset-949 May 19 00:34:54.584: INFO: Scaling statefulset ss to 0 May 19 00:35:14.603: INFO: Waiting for statefulset status.replicas to be updated to 0 May 19 00:35:14.606: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:35:14.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-949" for this suite. • [SLOW TEST:30.796 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":178,"skipped":3053,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:35:14.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-7222/configmap-test-05145ef1-e6b3-468a-8067-6060ac9b3af8 STEP: Creating a pod to test consuming configMaps May 19 00:35:14.744: INFO: Waiting up to 5m0s for pod "pod-configmaps-3acc0903-3580-4884-bdbb-164ad195d967" in namespace "configmap-7222" to be "Succeeded or Failed" May 19 00:35:14.747: INFO: Pod "pod-configmaps-3acc0903-3580-4884-bdbb-164ad195d967": Phase="Pending", Reason="", readiness=false. Elapsed: 3.126499ms May 19 00:35:16.751: INFO: Pod "pod-configmaps-3acc0903-3580-4884-bdbb-164ad195d967": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006470387s May 19 00:35:18.755: INFO: Pod "pod-configmaps-3acc0903-3580-4884-bdbb-164ad195d967": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011148679s STEP: Saw pod success May 19 00:35:18.755: INFO: Pod "pod-configmaps-3acc0903-3580-4884-bdbb-164ad195d967" satisfied condition "Succeeded or Failed" May 19 00:35:18.759: INFO: Trying to get logs from node latest-worker pod pod-configmaps-3acc0903-3580-4884-bdbb-164ad195d967 container env-test: STEP: delete the pod May 19 00:35:18.802: INFO: Waiting for pod pod-configmaps-3acc0903-3580-4884-bdbb-164ad195d967 to disappear May 19 00:35:18.834: INFO: Pod pod-configmaps-3acc0903-3580-4884-bdbb-164ad195d967 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:35:18.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7222" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":179,"skipped":3063,"failed":0} S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:35:18.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:35:18.911: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 19 00:35:18.931: INFO: Pod name sample-pod: Found 0 pods out of 1 May 19 00:35:23.951: INFO: Pod name sample-pod: Found 1 pod out of 1 STEP: ensuring each pod is running May 19 00:35:23.951: INFO: Creating deployment "test-rolling-update-deployment" May 19 00:35:23.968: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 19 00:35:23.983: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 19 00:35:26.023: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected May 19 00:35:26.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445324, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445324, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445324, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445324, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:35:28.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445324, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445324, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445327, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445324, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:35:30.030: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 19 00:35:30.039: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7306 /apis/apps/v1/namespaces/deployment-7306/deployments/test-rolling-update-deployment 48f9ca93-4953-4c45-ac02-4c39ef821e6f 5823729 1 2020-05-19 00:35:23 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-19 00:35:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-19 00:35:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029cc608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-19 00:35:24 +0000 UTC,LastTransitionTime:2020-05-19 00:35:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-19 00:35:28 +0000 UTC,LastTransitionTime:2020-05-19 00:35:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 19 00:35:30.042: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-7306 /apis/apps/v1/namespaces/deployment-7306/replicasets/test-rolling-update-deployment-df7bb669b 6bd4b055-1771-4373-a670-d4dd725f4484 5823718 1 2020-05-19 00:35:23 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 48f9ca93-4953-4c45-ac02-4c39ef821e6f 0xc00087d0e0 0xc00087d0e1}] [] [{kube-controller-manager Update apps/v1 2020-05-19 00:35:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48f9ca93-4953-4c45-ac02-4c39ef821e6f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod 
pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00087d158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 19 00:35:30.042: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 19 00:35:30.042: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7306 /apis/apps/v1/namespaces/deployment-7306/replicasets/test-rolling-update-controller cae8f290-424a-40e5-a14c-0ebe5bf8ba99 5823727 2 2020-05-19 00:35:18 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 48f9ca93-4953-4c45-ac02-4c39ef821e6f 0xc00087cfb7 0xc00087cfb8}] [] [{e2e.test Update apps/v1 2020-05-19 00:35:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-19 00:35:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48f9ca93-4953-4c45-ac02-4c39ef821e6f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00087d078 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 00:35:30.045: INFO: Pod "test-rolling-update-deployment-df7bb669b-xxczd" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-xxczd test-rolling-update-deployment-df7bb669b- deployment-7306 /api/v1/namespaces/deployment-7306/pods/test-rolling-update-deployment-df7bb669b-xxczd 71aeee2c-7847-4a03-a815-8a9e418ca1ef 5823717 0 2020-05-19 00:35:24 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 6bd4b055-1771-4373-a670-d4dd725f4484 0xc002bc6180 0xc002bc6181}] [] [{kube-controller-manager Update v1 2020-05-19 00:35:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6bd4b055-1771-4373-a670-d4dd725f4484\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:35:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.177\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bll4q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bll4q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bll4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPro
be:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:35:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:35:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:35:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:35:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.177,StartTime:2020-05-19 00:35:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:35:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://044d6855c32fae224fbdf658e5f9859d52284c8c23417980ac361de2393d2aa2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:35:30.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7306" for this suite. 
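For reference, the Deployment this test creates adopts the pre-existing "test-rolling-update-controller" ReplicaSet because their selectors match, then rolls pods over under the default RollingUpdate strategy. A sketch with the 25% maxSurge / 25% maxUnavailable defaults made explicit (replica count, labels, and image mirror the log; the revision annotations seen in the dumps above are managed by the controller):

```go
// Hypothetical sketch of the rolling-update Deployment under test.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxSurge := intstr.FromString("25%")
	maxUnavailable := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"}

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			// Matches the pods of the pre-existing "test-rolling-update-controller"
			// ReplicaSet, so the Deployment adopts it as its old revision.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}
```

With one replica, 25%/25% rounds to at most one surge pod and zero unavailable pods, which is why the status above briefly shows Replicas:2 while the old pod is still serving.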
• [SLOW TEST:11.210 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":180,"skipped":3064,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:35:30.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-txdrr in namespace proxy-6254 I0519 00:35:30.225554 7 runners.go:190] Created replication controller with name: proxy-service-txdrr, namespace: proxy-6254, replica count: 1 I0519 00:35:31.275915 7 runners.go:190] proxy-service-txdrr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:35:32.276119 7 runners.go:190] proxy-service-txdrr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:35:33.276344 7 runners.go:190] proxy-service-txdrr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 00:35:34.276560 7 runners.go:190] proxy-service-txdrr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0519 00:35:35.276774 7 runners.go:190] proxy-service-txdrr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 00:35:35.573: INFO: setup took 5.413700778s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 19 00:35:35.582: INFO: (0) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... 
(200; 8.922941ms) May 19 00:35:35.582: INFO: (0) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 9.144136ms) May 19 00:35:35.582: INFO: (0) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 9.105689ms) May 19 00:35:35.582: INFO: (0) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 8.950187ms) May 19 00:35:35.582: INFO: (0) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 8.992955ms) May 19 00:35:35.582: INFO: (0) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 9.009379ms) May 19 00:35:35.583: INFO: (0) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 10.398115ms) May 19 00:35:35.584: INFO: (0) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 10.521721ms) May 19 00:35:35.584: INFO: (0) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 11.226067ms) May 19 00:35:35.585: INFO: (0) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 11.523041ms) May 19 00:35:35.585: INFO: (0) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 11.841333ms) May 19 00:35:35.586: INFO: (0) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test<... (200; 6.610192ms) May 19 00:35:35.597: INFO: (1) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 7.070722ms) May 19 00:35:35.597: INFO: (1) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 6.813502ms) May 19 00:35:35.597: INFO: (1) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test (200; 7.764269ms) May 19 00:35:35.598: INFO: (1) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 7.944797ms) May 19 00:35:35.599: INFO: (1) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 8.090528ms) May 19 00:35:35.599: INFO: (1) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 8.493074ms) May 19 00:35:35.599: INFO: (1) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 8.629132ms) May 19 00:35:35.599: INFO: (1) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 8.83604ms) May 19 00:35:35.599: INFO: (1) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 8.741135ms) May 19 00:35:35.599: INFO: (1) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 9.062707ms) May 19 00:35:35.715: INFO: (2) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 115.733353ms) May 19 00:35:35.716: INFO: (2) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 115.981619ms) May 19 00:35:35.716: INFO: (2) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 116.043564ms) May 19 00:35:35.716: INFO: (2) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 116.0785ms) May 19 00:35:35.716: INFO: (2) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... 
(200; 115.969823ms) May 19 00:35:35.716: INFO: (2) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: ... (200; 5.175406ms) May 19 00:35:35.727: INFO: (3) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 5.416675ms) May 19 00:35:35.727: INFO: (3) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 5.56223ms) May 19 00:35:35.727: INFO: (3) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 5.589871ms) May 19 00:35:35.728: INFO: (3) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 6.010933ms) May 19 00:35:35.728: INFO: (3) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 6.528569ms) May 19 00:35:35.728: INFO: (3) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 6.461985ms) May 19 00:35:35.728: INFO: (3) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test<... (200; 6.524366ms) May 19 00:35:35.729: INFO: (3) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 6.921437ms) May 19 00:35:35.729: INFO: (3) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 7.49104ms) May 19 00:35:35.729: INFO: (3) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 7.62778ms) May 19 00:35:35.729: INFO: (3) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 7.762756ms) May 19 00:35:35.729: INFO: (3) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 7.723714ms) May 19 00:35:35.730: INFO: (3) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 7.878183ms) May 19 00:35:35.730: INFO: (3) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 8.242902ms) May 19 00:35:35.735: INFO: (4) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 5.232397ms) May 19 00:35:35.735: INFO: (4) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 5.367538ms) May 19 00:35:35.735: INFO: (4) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 5.407844ms) May 19 00:35:35.735: INFO: (4) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 5.358494ms) May 19 00:35:35.735: INFO: (4) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 5.470766ms) May 19 00:35:35.736: INFO: (4) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 6.296491ms) May 19 00:35:35.736: INFO: (4) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 6.274285ms) May 19 00:35:35.736: INFO: (4) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 6.362199ms) May 19 00:35:35.736: INFO: (4) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 6.424282ms) May 19 00:35:35.736: INFO: (4) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test<... 
(200; 6.494189ms) May 19 00:35:35.737: INFO: (4) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 6.478314ms) May 19 00:35:35.739: INFO: (5) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 2.355788ms) May 19 00:35:35.741: INFO: (5) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 3.999768ms) May 19 00:35:35.741: INFO: (5) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 4.038457ms) May 19 00:35:35.741: INFO: (5) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 4.102699ms) May 19 00:35:35.741: INFO: (5) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 4.01774ms) May 19 00:35:35.741: INFO: (5) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 3.952733ms) May 19 00:35:35.753: INFO: (5) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 16.651056ms) May 19 00:35:35.754: INFO: (5) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 16.642592ms) May 19 00:35:35.754: INFO: (5) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test<... (200; 16.983255ms) May 19 00:35:35.754: INFO: (5) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 17.107015ms) May 19 00:35:35.756: INFO: (5) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 18.679212ms) May 19 00:35:35.756: INFO: (5) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 18.710875ms) May 19 00:35:35.756: INFO: (5) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 18.762834ms) May 19 00:35:35.756: INFO: (5) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 18.78523ms) May 19 00:35:35.765: INFO: (6) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 8.765779ms) May 19 00:35:35.767: INFO: (6) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 11.363613ms) May 19 00:35:35.767: INFO: (6) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 11.420265ms) May 19 00:35:35.770: INFO: (6) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... (200; 14.179325ms) May 19 00:35:35.770: INFO: (6) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 14.454961ms) May 19 00:35:35.770: INFO: (6) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 14.461257ms) May 19 00:35:35.770: INFO: (6) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 13.983924ms) May 19 00:35:35.770: INFO: (6) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: ... 
(200; 14.031966ms) May 19 00:35:35.770: INFO: (6) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 14.184312ms) May 19 00:35:35.777: INFO: (6) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 21.240611ms) May 19 00:35:35.777: INFO: (6) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 21.609766ms) May 19 00:35:35.777: INFO: (6) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 21.52977ms) May 19 00:35:35.778: INFO: (6) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 21.891078ms) May 19 00:35:35.778: INFO: (6) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 21.748371ms) May 19 00:35:35.778: INFO: (6) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 22.233363ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 13.575813ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 14.140344ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 14.12974ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 14.175913ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 14.188519ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 14.27198ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 14.150541ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 14.182734ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 14.232819ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 14.161335ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 14.236285ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 14.239761ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... (200; 14.208025ms) May 19 00:35:35.792: INFO: (7) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test<... (200; 3.306732ms) May 19 00:35:35.798: INFO: (8) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... 
(200; 4.48897ms) May 19 00:35:35.798: INFO: (8) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 4.449273ms) May 19 00:35:35.798: INFO: (8) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 4.61584ms) May 19 00:35:35.798: INFO: (8) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 4.652149ms) May 19 00:35:35.799: INFO: (8) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 5.306844ms) May 19 00:35:35.799: INFO: (8) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 5.354679ms) May 19 00:35:35.799: INFO: (8) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: ... (200; 106.272362ms) May 19 00:35:35.908: INFO: (9) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 106.258595ms) May 19 00:35:35.908: INFO: (9) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 106.281256ms) May 19 00:35:35.908: INFO: (9) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test<... (200; 108.146595ms) May 19 00:35:35.911: INFO: (9) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 109.121057ms) May 19 00:35:35.911: INFO: (9) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 109.308163ms) May 19 00:35:35.911: INFO: (9) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 109.409794ms) May 19 00:35:35.911: INFO: (9) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 109.343401ms) May 19 00:35:35.911: INFO: (9) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 109.458574ms) May 19 00:35:35.911: INFO: (9) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 109.451549ms) May 19 00:35:35.911: INFO: (9) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 109.724399ms) May 19 00:35:35.912: INFO: (9) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 109.905925ms) May 19 00:35:35.912: INFO: (9) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 109.807148ms) May 19 00:35:35.971: INFO: (10) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 59.189332ms) May 19 00:35:35.971: INFO: (10) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 59.401726ms) May 19 00:35:35.971: INFO: (10) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 59.455671ms) May 19 00:35:35.971: INFO: (10) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 59.583828ms) May 19 00:35:35.971: INFO: (10) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: ... (200; 60.370106ms) May 19 00:35:35.972: INFO: (10) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... 
(200; 60.358251ms) May 19 00:35:35.973: INFO: (10) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 61.096142ms) May 19 00:35:35.973: INFO: (10) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 61.597762ms) May 19 00:35:35.974: INFO: (10) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 62.564656ms) May 19 00:35:35.974: INFO: (10) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 62.5239ms) May 19 00:35:35.974: INFO: (10) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 62.642283ms) May 19 00:35:35.975: INFO: (10) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 62.998861ms) May 19 00:35:35.975: INFO: (10) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 63.34475ms) May 19 00:35:35.975: INFO: (10) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 63.326356ms) May 19 00:35:36.006: INFO: (11) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 30.976585ms) May 19 00:35:36.006: INFO: (11) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 30.895475ms) May 19 00:35:36.006: INFO: (11) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 31.103173ms) May 19 00:35:36.006: INFO: (11) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... (200; 30.938016ms) May 19 00:35:36.006: INFO: (11) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 31.156983ms) May 19 00:35:36.006: INFO: (11) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: ... 
(200; 31.889463ms) May 19 00:35:36.007: INFO: (11) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 31.929177ms) May 19 00:35:36.008: INFO: (11) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 32.171499ms) May 19 00:35:36.111: INFO: (11) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 135.865713ms) May 19 00:35:36.111: INFO: (11) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 135.896998ms) May 19 00:35:36.111: INFO: (11) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 135.837279ms) May 19 00:35:36.111: INFO: (11) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 135.933171ms) May 19 00:35:36.111: INFO: (11) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 135.983371ms) May 19 00:35:36.111: INFO: (11) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 135.941731ms) May 19 00:35:36.127: INFO: (12) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 15.771284ms) May 19 00:35:36.128: INFO: (12) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 16.330615ms) May 19 00:35:36.128: INFO: (12) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 16.928689ms) May 19 00:35:36.128: INFO: (12) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 16.819053ms) May 19 00:35:36.128: INFO: (12) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 16.872908ms) May 19 00:35:36.128: INFO: (12) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 16.898564ms) May 19 00:35:36.128: INFO: (12) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 16.935141ms) May 19 00:35:36.129: INFO: (12) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 17.786168ms) May 19 00:35:36.129: INFO: (12) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 17.817291ms) May 19 00:35:36.129: INFO: (12) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 17.910804ms) May 19 00:35:36.129: INFO: (12) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 17.932541ms) May 19 00:35:36.130: INFO: (12) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 18.218087ms) May 19 00:35:36.130: INFO: (12) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 18.387639ms) May 19 00:35:36.130: INFO: (12) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 18.325875ms) May 19 00:35:36.130: INFO: (12) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... (200; 18.346052ms) May 19 00:35:36.130: INFO: (12) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test<... 
(200; 29.033875ms) May 19 00:35:36.160: INFO: (13) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 29.255645ms) May 19 00:35:36.160: INFO: (13) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 29.368539ms) May 19 00:35:36.160: INFO: (13) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 29.482906ms) May 19 00:35:36.160: INFO: (13) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 29.515197ms) May 19 00:35:36.160: INFO: (13) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test (200; 31.742027ms) May 19 00:35:36.163: INFO: (13) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 32.680283ms) May 19 00:35:36.163: INFO: (13) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 32.770568ms) May 19 00:35:36.163: INFO: (13) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 32.692557ms) May 19 00:35:36.163: INFO: (13) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 32.607669ms) May 19 00:35:36.163: INFO: (13) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 32.864392ms) May 19 00:35:36.163: INFO: (13) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 33.136873ms) May 19 00:35:36.165: INFO: (13) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 34.947142ms) May 19 00:35:36.166: INFO: (13) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 35.679489ms) May 19 00:35:36.185: INFO: (14) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 19.00316ms) May 19 00:35:36.185: INFO: (14) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 18.985121ms) May 19 00:35:36.185: INFO: (14) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 19.089575ms) May 19 00:35:36.185: INFO: (14) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... 
(200; 19.262803ms) May 19 00:35:36.186: INFO: (14) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test (200; 21.291897ms) May 19 00:35:36.187: INFO: (14) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 21.463442ms) May 19 00:35:36.187: INFO: (14) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 21.529504ms) May 19 00:35:36.187: INFO: (14) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 21.419552ms) May 19 00:35:36.187: INFO: (14) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 21.582965ms) May 19 00:35:36.188: INFO: (14) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 21.634546ms) May 19 00:35:36.188: INFO: (14) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 21.631058ms) May 19 00:35:36.208: INFO: (14) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 42.447728ms) May 19 00:35:36.232: INFO: (15) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 23.178985ms) May 19 00:35:36.232: INFO: (15) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 23.220405ms) May 19 00:35:36.232: INFO: (15) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 23.299461ms) May 19 00:35:36.232: INFO: (15) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... (200; 23.381891ms) May 19 00:35:36.232: INFO: (15) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 23.299349ms) May 19 00:35:36.232: INFO: (15) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test (200; 24.735704ms) May 19 00:35:36.233: INFO: (15) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 24.921336ms) May 19 00:35:36.244: INFO: (15) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 35.588576ms) May 19 00:35:36.244: INFO: (15) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 35.607288ms) May 19 00:35:36.244: INFO: (15) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 35.945668ms) May 19 00:35:36.244: INFO: (15) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 35.97021ms) May 19 00:35:36.245: INFO: (15) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 36.038388ms) May 19 00:35:36.245: INFO: (15) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 36.011083ms) May 19 00:35:36.260: INFO: (16) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 15.064727ms) May 19 00:35:36.261: INFO: (16) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 16.161023ms) May 19 00:35:36.261: INFO: (16) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 16.276565ms) May 19 00:35:36.261: INFO: (16) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... 
(200; 16.300639ms) May 19 00:35:36.261: INFO: (16) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 16.381909ms) May 19 00:35:36.261: INFO: (16) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 16.386566ms) May 19 00:35:36.261: INFO: (16) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 16.571188ms) May 19 00:35:36.262: INFO: (16) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 16.588153ms) May 19 00:35:36.262: INFO: (16) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 16.555788ms) May 19 00:35:36.262: INFO: (16) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 16.564715ms) May 19 00:35:36.262: INFO: (16) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 17.021942ms) May 19 00:35:36.262: INFO: (16) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test (200; 17.569513ms) May 19 00:35:36.263: INFO: (16) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 17.904524ms) May 19 00:35:36.263: INFO: (16) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 18.116386ms) May 19 00:35:36.263: INFO: (16) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 18.285815ms) May 19 00:35:36.271: INFO: (17) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 7.703563ms) May 19 00:35:36.271: INFO: (17) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test (200; 7.781933ms) May 19 00:35:36.273: INFO: (17) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 9.954673ms) May 19 00:35:36.275: INFO: (17) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 11.734641ms) May 19 00:35:36.276: INFO: (17) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 11.613127ms) May 19 00:35:36.276: INFO: (17) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 12.037455ms) May 19 00:35:36.276: INFO: (17) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 12.594179ms) May 19 00:35:36.276: INFO: (17) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 12.148582ms) May 19 00:35:36.276: INFO: (17) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 12.542247ms) May 19 00:35:36.276: INFO: (17) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 12.450772ms) May 19 00:35:36.276: INFO: (17) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 12.596244ms) May 19 00:35:36.276: INFO: (17) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... 
(200; 12.691042ms) May 19 00:35:36.277: INFO: (17) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 12.770027ms) May 19 00:35:36.277: INFO: (17) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 13.575411ms) May 19 00:35:36.277: INFO: (17) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 12.922194ms) May 19 00:35:36.283: INFO: (18) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname1/proxy/: foo (200; 5.806614ms) May 19 00:35:36.283: INFO: (18) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname2/proxy/: bar (200; 5.958669ms) May 19 00:35:36.283: INFO: (18) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname2/proxy/: tls qux (200; 5.859143ms) May 19 00:35:36.283: INFO: (18) /api/v1/namespaces/proxy-6254/services/proxy-service-txdrr:portname2/proxy/: bar (200; 5.927661ms) May 19 00:35:36.283: INFO: (18) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 6.07607ms) May 19 00:35:36.283: INFO: (18) /api/v1/namespaces/proxy-6254/services/http:proxy-service-txdrr:portname1/proxy/: foo (200; 6.202577ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 6.22492ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 6.229234ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:462/proxy/: tls qux (200; 6.250304ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:162/proxy/: bar (200; 6.772904ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:460/proxy/: tls baz (200; 6.801399ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... (200; 6.814517ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 6.911151ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:160/proxy/: foo (200; 6.953717ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5/proxy/: test (200; 7.106478ms) May 19 00:35:36.284: INFO: (18) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: test (200; 6.299458ms) May 19 00:35:36.291: INFO: (19) /api/v1/namespaces/proxy-6254/pods/http:proxy-service-txdrr-rpvw5:1080/proxy/: ... (200; 6.279025ms) May 19 00:35:36.291: INFO: (19) /api/v1/namespaces/proxy-6254/pods/proxy-service-txdrr-rpvw5:1080/proxy/: test<... 
(200; 6.406209ms) May 19 00:35:36.291: INFO: (19) /api/v1/namespaces/proxy-6254/services/https:proxy-service-txdrr:tlsportname1/proxy/: tls baz (200; 6.306149ms) May 19 00:35:36.291: INFO: (19) /api/v1/namespaces/proxy-6254/pods/https:proxy-service-txdrr-rpvw5:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:35:49.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3794" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":182,"skipped":3090,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:35:49.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 19 00:35:57.232: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 00:35:57.296: INFO: Pod pod-with-prestop-exec-hook still exists May 19 00:35:59.296: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 00:35:59.300: INFO: Pod pod-with-prestop-exec-hook still exists May 19 00:36:01.296: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 00:36:01.300: INFO: Pod pod-with-prestop-exec-hook still exists May 19 00:36:03.296: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 00:36:03.301: INFO: Pod pod-with-prestop-exec-hook still exists May 19 00:36:05.296: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 19 00:36:05.301: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:36:05.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1205" for this suite. 
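------------------------------
Aside: the sequence just logged creates a pod whose container declares a PreStop exec hook, deletes the pod, polls every two seconds until it is gone, and only then checks that the hook reached the handler pod created in BeforeEach. A minimal client-go sketch of that create/delete/poll pattern follows; the image, the hook command, the handler address, and the namespace are illustrative stand-ins rather than the framework's actual values, and corev1.Handler is the lifecycle handler type of this client-go vintage (newer releases rename it LifecycleHandler).

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "default" // the framework generates a fresh namespace per test instead

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox", // illustrative image
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{Exec: &corev1.ExecAction{
						// hypothetical handler endpoint standing in for the test's handler pod
						Command: []string{"sh", "-c", "wget -qO- http://handler-pod:8080/echo?msg=prestop"},
					}},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Graceful delete returns immediately while the kubelet runs the hook and
	// tears the container down, hence the poll-until-NotFound loop seen above.
	_ = client.CoreV1().Pods(ns).Delete(ctx, pod.Name, metav1.DeleteOptions{})
	for {
		if _, err := client.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{}); apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}
}
------------------------------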
• [SLOW TEST:16.216 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":183,"skipped":3114,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:36:05.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-0ab6b517-e885-4d8c-b9aa-1c1fad44b6bc [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:36:05.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4387" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":184,"skipped":3132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:36:05.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:36:05.511: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 19 00:36:05.520: INFO: Number of nodes with available pods: 0 May 19 00:36:05.520: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
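------------------------------
Aside: "complex daemon" here means the DaemonSet is created with a pod-template nodeSelector that no node initially satisfies, so zero daemon pods run until a node is labeled to match — which is what the blue/green STEPs above and below exercise. A hedged client-go sketch of that setup; the label key "color", the names, and the image are illustrative stand-ins (the log only fixes the values blue and green).

package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSelectorDaemonSet creates a DaemonSet whose pods only schedule onto
// nodes labeled color=blue; with no such node, available pods stay at zero.
func createSelectorDaemonSet(ctx context.Context, c kubernetes.Interface, ns string) error {
	labels := map[string]string{"app": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{"color": "blue"},
					Containers:   []corev1.Container{{Name: "app", Image: "k8s.gcr.io/pause:3.2"}},
				},
			},
		},
	}
	_, err := c.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	return err
}

// labelNode flips the node label; setting color=blue launches the daemon pod,
// and rewriting it to green later unschedules the pod again, as the poll
// output below records.
func labelNode(ctx context.Context, c kubernetes.Interface, nodeName, value string) error {
	node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Labels["color"] = value
	_, err = c.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}
------------------------------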
May 19 00:36:05.617: INFO: Number of nodes with available pods: 0 May 19 00:36:05.617: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:06.621: INFO: Number of nodes with available pods: 0 May 19 00:36:06.621: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:07.717: INFO: Number of nodes with available pods: 0 May 19 00:36:07.717: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:08.637: INFO: Number of nodes with available pods: 0 May 19 00:36:08.637: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:09.636: INFO: Number of nodes with available pods: 1 May 19 00:36:09.636: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 19 00:36:09.671: INFO: Number of nodes with available pods: 1 May 19 00:36:09.671: INFO: Number of running nodes: 0, number of available pods: 1 May 19 00:36:10.691: INFO: Number of nodes with available pods: 0 May 19 00:36:10.691: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 19 00:36:10.745: INFO: Number of nodes with available pods: 0 May 19 00:36:10.745: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:11.749: INFO: Number of nodes with available pods: 0 May 19 00:36:11.749: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:12.749: INFO: Number of nodes with available pods: 0 May 19 00:36:12.749: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:13.749: INFO: Number of nodes with available pods: 0 May 19 00:36:13.749: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:14.762: INFO: Number of nodes with available pods: 0 May 19 00:36:14.762: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:15.750: INFO: Number of nodes with available pods: 0 May 19 00:36:15.750: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:16.748: INFO: Number of nodes with available pods: 0 May 19 00:36:16.749: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:17.768: INFO: Number of nodes with available pods: 0 May 19 00:36:17.768: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:18.757: INFO: Number of nodes with available pods: 0 May 19 00:36:18.757: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:19.748: INFO: Number of nodes with available pods: 0 May 19 00:36:19.748: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:20.758: INFO: Number of nodes with available pods: 0 May 19 00:36:20.758: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:21.749: INFO: Number of nodes with available pods: 0 May 19 00:36:21.749: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:22.761: INFO: Number of nodes with available pods: 0 May 19 00:36:22.761: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:23.749: INFO: Number of nodes with available pods: 0 May 19 00:36:23.749: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:24.749: INFO: Number of nodes with available pods: 0 May 19 00:36:24.749: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:25.750: INFO: Number of nodes with available pods: 0 May 19 00:36:25.750: INFO: Node latest-worker is running 
more than one daemon pod May 19 00:36:26.764: INFO: Number of nodes with available pods: 0 May 19 00:36:26.764: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:27.750: INFO: Number of nodes with available pods: 0 May 19 00:36:27.750: INFO: Node latest-worker is running more than one daemon pod May 19 00:36:28.750: INFO: Number of nodes with available pods: 1 May 19 00:36:28.750: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5225, will wait for the garbage collector to delete the pods May 19 00:36:28.814: INFO: Deleting DaemonSet.extensions daemon-set took: 6.823628ms May 19 00:36:29.114: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.283112ms May 19 00:36:34.917: INFO: Number of nodes with available pods: 0 May 19 00:36:34.918: INFO: Number of running nodes: 0, number of available pods: 0 May 19 00:36:34.920: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5225/daemonsets","resourceVersion":"5824132"},"items":null} May 19 00:36:34.922: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5225/pods","resourceVersion":"5824132"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:36:34.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5225" for this suite. • [SLOW TEST:29.604 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":185,"skipped":3180,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:36:34.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:36:35.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16f2a3cd-a9bd-48cb-9975-af1be7879ab0" in namespace "downward-api-6444" to be "Succeeded or Failed" May 19 00:36:35.116: INFO: Pod "downwardapi-volume-16f2a3cd-a9bd-48cb-9975-af1be7879ab0": Phase="Pending", 
Reason="", readiness=false. Elapsed: 35.201132ms May 19 00:36:37.323: INFO: Pod "downwardapi-volume-16f2a3cd-a9bd-48cb-9975-af1be7879ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242423671s May 19 00:36:39.326: INFO: Pod "downwardapi-volume-16f2a3cd-a9bd-48cb-9975-af1be7879ab0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.24561488s STEP: Saw pod success May 19 00:36:39.326: INFO: Pod "downwardapi-volume-16f2a3cd-a9bd-48cb-9975-af1be7879ab0" satisfied condition "Succeeded or Failed" May 19 00:36:39.328: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-16f2a3cd-a9bd-48cb-9975-af1be7879ab0 container client-container: STEP: delete the pod May 19 00:36:39.399: INFO: Waiting for pod downwardapi-volume-16f2a3cd-a9bd-48cb-9975-af1be7879ab0 to disappear May 19 00:36:39.407: INFO: Pod downwardapi-volume-16f2a3cd-a9bd-48cb-9975-af1be7879ab0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:36:39.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6444" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":186,"skipped":3192,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:36:39.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 19 00:36:39.499: INFO: Waiting up to 5m0s for pod "pod-ac9c83f7-0cab-45df-9b57-a9e99f3d53ef" in namespace "emptydir-6350" to be "Succeeded or Failed" May 19 00:36:39.518: INFO: Pod "pod-ac9c83f7-0cab-45df-9b57-a9e99f3d53ef": Phase="Pending", Reason="", readiness=false. Elapsed: 19.018567ms May 19 00:36:41.522: INFO: Pod "pod-ac9c83f7-0cab-45df-9b57-a9e99f3d53ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023399599s May 19 00:36:43.526: INFO: Pod "pod-ac9c83f7-0cab-45df-9b57-a9e99f3d53ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027763278s STEP: Saw pod success May 19 00:36:43.526: INFO: Pod "pod-ac9c83f7-0cab-45df-9b57-a9e99f3d53ef" satisfied condition "Succeeded or Failed" May 19 00:36:43.530: INFO: Trying to get logs from node latest-worker pod pod-ac9c83f7-0cab-45df-9b57-a9e99f3d53ef container test-container: STEP: delete the pod May 19 00:36:43.704: INFO: Waiting for pod pod-ac9c83f7-0cab-45df-9b57-a9e99f3d53ef to disappear May 19 00:36:43.763: INFO: Pod pod-ac9c83f7-0cab-45df-9b57-a9e99f3d53ef no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:36:43.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6350" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":187,"skipped":3201,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:36:43.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-c9f5414c-4b1d-4c2c-8e12-39249e3f9903 STEP: Creating a pod to test consume secrets May 19 00:36:44.024: INFO: Waiting up to 5m0s for pod "pod-secrets-f6018f4d-26e6-4742-9be4-746a8c222c87" in namespace "secrets-1352" to be "Succeeded or Failed" May 19 00:36:44.039: INFO: Pod "pod-secrets-f6018f4d-26e6-4742-9be4-746a8c222c87": Phase="Pending", Reason="", readiness=false. Elapsed: 15.467762ms May 19 00:36:46.044: INFO: Pod "pod-secrets-f6018f4d-26e6-4742-9be4-746a8c222c87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019930319s May 19 00:36:48.048: INFO: Pod "pod-secrets-f6018f4d-26e6-4742-9be4-746a8c222c87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024236292s STEP: Saw pod success May 19 00:36:48.048: INFO: Pod "pod-secrets-f6018f4d-26e6-4742-9be4-746a8c222c87" satisfied condition "Succeeded or Failed" May 19 00:36:48.051: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-f6018f4d-26e6-4742-9be4-746a8c222c87 container secret-volume-test: STEP: delete the pod May 19 00:36:48.086: INFO: Waiting for pod pod-secrets-f6018f4d-26e6-4742-9be4-746a8c222c87 to disappear May 19 00:36:48.089: INFO: Pod pod-secrets-f6018f4d-26e6-4742-9be4-746a8c222c87 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:36:48.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1352" for this suite. STEP: Destroying namespace "secret-namespace-1469" for this suite. 
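------------------------------
Aside: the two namespaces destroyed above are the whole point of this test — an identically named secret exists in both secrets-1352 and the second namespace, and the pod must mount the one from its own namespace. The guarantee falls out of the API shape: a secret volume source carries only a name, never a namespace. A minimal sketch of such a pod spec; the container name matches the log, while the image, command, and mount path are illustrative.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithSecretVolume mounts the named secret from the pod's own namespace;
// there is no field through which it could reference a foreign namespace.
func podWithSecretVolume(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}
------------------------------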
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":188,"skipped":3213,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:36:48.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:36:48.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-087b7310-427c-4c3f-9f0f-a78f898ecae5" in namespace "projected-9958" to be "Succeeded or Failed" May 19 00:36:48.470: INFO: Pod "downwardapi-volume-087b7310-427c-4c3f-9f0f-a78f898ecae5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.692187ms May 19 00:36:50.553: INFO: Pod "downwardapi-volume-087b7310-427c-4c3f-9f0f-a78f898ecae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087452863s May 19 00:36:52.558: INFO: Pod "downwardapi-volume-087b7310-427c-4c3f-9f0f-a78f898ecae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091916783s STEP: Saw pod success May 19 00:36:52.558: INFO: Pod "downwardapi-volume-087b7310-427c-4c3f-9f0f-a78f898ecae5" satisfied condition "Succeeded or Failed" May 19 00:36:52.562: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-087b7310-427c-4c3f-9f0f-a78f898ecae5 container client-container: STEP: delete the pod May 19 00:36:52.613: INFO: Waiting for pod downwardapi-volume-087b7310-427c-4c3f-9f0f-a78f898ecae5 to disappear May 19 00:36:52.618: INFO: Pod downwardapi-volume-087b7310-427c-4c3f-9f0f-a78f898ecae5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:36:52.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9958" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":189,"skipped":3217,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:36:52.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:38:52.700: INFO: Deleting pod "var-expansion-0fda055b-e110-440f-911e-91fad116040c" in namespace "var-expansion-3206" May 19 00:38:52.705: INFO: Wait up to 5m0s for pod "var-expansion-0fda055b-e110-440f-911e-91fad116040c" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:39:06.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3206" for this suite. • [SLOW TEST:134.137 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":190,"skipped":3222,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:39:06.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:39:07.471: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:39:09.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725445547, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445547, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445547, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445547, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:39:12.516: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:39:12.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3240" for this suite. STEP: Destroying namespace "webhook-3240-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.089 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":191,"skipped":3222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:39:12.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:39:13.992: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set May 19 00:39:16.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445553, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445553, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445554, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725445553, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:39:19.035: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:39:19.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8873" for this suite. STEP: Destroying namespace "webhook-8873-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.898 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":192,"skipped":3252,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:39:19.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 19 00:39:19.841: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:39:27.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-679" for this suite. 
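The submit/watch/delete sequence in the Pods test above amounts to the following manual check. A sketch under assumed names: pod-submit-remove and the nginx image are illustrative, the test uses its own generated pod.

kubectl -n pods-679 get pods --watch &                                 # observe ADDED/MODIFIED/DELETED events
kubectl -n pods-679 run pod-submit-remove --image=nginx --restart=Never
kubectl -n pods-679 delete pod pod-submit-remove --grace-period=30     # graceful deletion; the watch should report it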
• [SLOW TEST:7.974 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":3256,"failed":0} SSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:39:27.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9127 STEP: creating service affinity-nodeport in namespace services-9127 STEP: creating replication controller affinity-nodeport in namespace services-9127 I0519 00:39:27.867545 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-9127, replica count: 3 I0519 00:39:30.917933 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:39:33.918155 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 00:39:33.928: INFO: Creating new exec pod May 19 00:39:38.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9127 execpod-affinityprh2p -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 19 00:39:39.204: INFO: stderr: "I0519 00:39:39.088365 3351 log.go:172] (0xc0009cd1e0) (0xc00084ee60) Create stream\nI0519 00:39:39.088420 3351 log.go:172] (0xc0009cd1e0) (0xc00084ee60) Stream added, broadcasting: 1\nI0519 00:39:39.091528 3351 log.go:172] (0xc0009cd1e0) Reply frame received for 1\nI0519 00:39:39.091574 3351 log.go:172] (0xc0009cd1e0) (0xc000a48460) Create stream\nI0519 00:39:39.091613 3351 log.go:172] (0xc0009cd1e0) (0xc000a48460) Stream added, broadcasting: 3\nI0519 00:39:39.092867 3351 log.go:172] (0xc0009cd1e0) Reply frame received for 3\nI0519 00:39:39.092941 3351 log.go:172] (0xc0009cd1e0) (0xc000a5a5a0) Create stream\nI0519 00:39:39.092964 3351 log.go:172] (0xc0009cd1e0) (0xc000a5a5a0) Stream added, broadcasting: 5\nI0519 00:39:39.094335 3351 log.go:172] (0xc0009cd1e0) Reply frame received for 5\nI0519 00:39:39.187360 3351 log.go:172] (0xc0009cd1e0) Data frame received for 5\nI0519 00:39:39.187393 3351 log.go:172] (0xc000a5a5a0) (5) Data frame handling\nI0519 00:39:39.187410 3351 log.go:172] (0xc000a5a5a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0519 00:39:39.198523 3351 log.go:172] (0xc0009cd1e0) Data frame 
received for 5\nI0519 00:39:39.198543 3351 log.go:172] (0xc000a5a5a0) (5) Data frame handling\nI0519 00:39:39.198563 3351 log.go:172] (0xc000a5a5a0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0519 00:39:39.198602 3351 log.go:172] (0xc0009cd1e0) Data frame received for 3\nI0519 00:39:39.198613 3351 log.go:172] (0xc000a48460) (3) Data frame handling\nI0519 00:39:39.198809 3351 log.go:172] (0xc0009cd1e0) Data frame received for 5\nI0519 00:39:39.198820 3351 log.go:172] (0xc000a5a5a0) (5) Data frame handling\nI0519 00:39:39.200772 3351 log.go:172] (0xc0009cd1e0) Data frame received for 1\nI0519 00:39:39.200852 3351 log.go:172] (0xc00084ee60) (1) Data frame handling\nI0519 00:39:39.200872 3351 log.go:172] (0xc00084ee60) (1) Data frame sent\nI0519 00:39:39.200882 3351 log.go:172] (0xc0009cd1e0) (0xc00084ee60) Stream removed, broadcasting: 1\nI0519 00:39:39.200893 3351 log.go:172] (0xc0009cd1e0) Go away received\nI0519 00:39:39.201238 3351 log.go:172] (0xc0009cd1e0) (0xc00084ee60) Stream removed, broadcasting: 1\nI0519 00:39:39.201258 3351 log.go:172] (0xc0009cd1e0) (0xc000a48460) Stream removed, broadcasting: 3\nI0519 00:39:39.201266 3351 log.go:172] (0xc0009cd1e0) (0xc000a5a5a0) Stream removed, broadcasting: 5\n" May 19 00:39:39.204: INFO: stdout: "" May 19 00:39:39.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9127 execpod-affinityprh2p -- /bin/sh -x -c nc -zv -t -w 2 10.106.175.125 80' May 19 00:39:39.409: INFO: stderr: "I0519 00:39:39.331921 3370 log.go:172] (0xc000928000) (0xc000713d60) Create stream\nI0519 00:39:39.332011 3370 log.go:172] (0xc000928000) (0xc000713d60) Stream added, broadcasting: 1\nI0519 00:39:39.333836 3370 log.go:172] (0xc000928000) Reply frame received for 1\nI0519 00:39:39.333864 3370 log.go:172] (0xc000928000) (0xc0006eae60) Create stream\nI0519 00:39:39.333876 3370 log.go:172] (0xc000928000) (0xc0006eae60) Stream added, broadcasting: 3\nI0519 00:39:39.334770 3370 log.go:172] (0xc000928000) Reply frame received for 3\nI0519 00:39:39.334822 3370 log.go:172] (0xc000928000) (0xc00049cf00) Create stream\nI0519 00:39:39.334846 3370 log.go:172] (0xc000928000) (0xc00049cf00) Stream added, broadcasting: 5\nI0519 00:39:39.335608 3370 log.go:172] (0xc000928000) Reply frame received for 5\nI0519 00:39:39.402695 3370 log.go:172] (0xc000928000) Data frame received for 3\nI0519 00:39:39.402763 3370 log.go:172] (0xc0006eae60) (3) Data frame handling\nI0519 00:39:39.402791 3370 log.go:172] (0xc000928000) Data frame received for 5\nI0519 00:39:39.402812 3370 log.go:172] (0xc00049cf00) (5) Data frame handling\nI0519 00:39:39.402821 3370 log.go:172] (0xc00049cf00) (5) Data frame sent\nI0519 00:39:39.402836 3370 log.go:172] (0xc000928000) Data frame received for 5\nI0519 00:39:39.402864 3370 log.go:172] (0xc00049cf00) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.175.125 80\nConnection to 10.106.175.125 80 port [tcp/http] succeeded!\nI0519 00:39:39.404516 3370 log.go:172] (0xc000928000) Data frame received for 1\nI0519 00:39:39.404554 3370 log.go:172] (0xc000713d60) (1) Data frame handling\nI0519 00:39:39.404579 3370 log.go:172] (0xc000713d60) (1) Data frame sent\nI0519 00:39:39.404614 3370 log.go:172] (0xc000928000) (0xc000713d60) Stream removed, broadcasting: 1\nI0519 00:39:39.404662 3370 log.go:172] (0xc000928000) Go away received\nI0519 00:39:39.405011 3370 log.go:172] (0xc000928000) (0xc000713d60) Stream removed, broadcasting: 1\nI0519 
00:39:39.405038 3370 log.go:172] (0xc000928000) (0xc0006eae60) Stream removed, broadcasting: 3\nI0519 00:39:39.405047 3370 log.go:172] (0xc000928000) (0xc00049cf00) Stream removed, broadcasting: 5\n" May 19 00:39:39.409: INFO: stdout: "" May 19 00:39:39.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9127 execpod-affinityprh2p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30214' May 19 00:39:39.624: INFO: stderr: "I0519 00:39:39.543454 3391 log.go:172] (0xc000a81080) (0xc000ac01e0) Create stream\nI0519 00:39:39.543521 3391 log.go:172] (0xc000a81080) (0xc000ac01e0) Stream added, broadcasting: 1\nI0519 00:39:39.548655 3391 log.go:172] (0xc000a81080) Reply frame received for 1\nI0519 00:39:39.548724 3391 log.go:172] (0xc000a81080) (0xc000857e00) Create stream\nI0519 00:39:39.548743 3391 log.go:172] (0xc000a81080) (0xc000857e00) Stream added, broadcasting: 3\nI0519 00:39:39.550030 3391 log.go:172] (0xc000a81080) Reply frame received for 3\nI0519 00:39:39.550063 3391 log.go:172] (0xc000a81080) (0xc000731b80) Create stream\nI0519 00:39:39.550072 3391 log.go:172] (0xc000a81080) (0xc000731b80) Stream added, broadcasting: 5\nI0519 00:39:39.550946 3391 log.go:172] (0xc000a81080) Reply frame received for 5\nI0519 00:39:39.617051 3391 log.go:172] (0xc000a81080) Data frame received for 5\nI0519 00:39:39.617092 3391 log.go:172] (0xc000731b80) (5) Data frame handling\nI0519 00:39:39.617103 3391 log.go:172] (0xc000731b80) (5) Data frame sent\nI0519 00:39:39.617355 3391 log.go:172] (0xc000a81080) Data frame received for 5\nI0519 00:39:39.617381 3391 log.go:172] (0xc000731b80) (5) Data frame handling\nI0519 00:39:39.617399 3391 log.go:172] (0xc000a81080) Data frame received for 3\nI0519 00:39:39.617416 3391 log.go:172] (0xc000857e00) (3) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30214\nConnection to 172.17.0.13 30214 port [tcp/30214] succeeded!\nI0519 00:39:39.618681 3391 log.go:172] (0xc000a81080) Data frame received for 1\nI0519 00:39:39.618729 3391 log.go:172] (0xc000ac01e0) (1) Data frame handling\nI0519 00:39:39.618752 3391 log.go:172] (0xc000ac01e0) (1) Data frame sent\nI0519 00:39:39.618770 3391 log.go:172] (0xc000a81080) (0xc000ac01e0) Stream removed, broadcasting: 1\nI0519 00:39:39.618786 3391 log.go:172] (0xc000a81080) Go away received\nI0519 00:39:39.619191 3391 log.go:172] (0xc000a81080) (0xc000ac01e0) Stream removed, broadcasting: 1\nI0519 00:39:39.619210 3391 log.go:172] (0xc000a81080) (0xc000857e00) Stream removed, broadcasting: 3\nI0519 00:39:39.619220 3391 log.go:172] (0xc000a81080) (0xc000731b80) Stream removed, broadcasting: 5\n" May 19 00:39:39.624: INFO: stdout: "" May 19 00:39:39.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9127 execpod-affinityprh2p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30214' May 19 00:39:39.841: INFO: stderr: "I0519 00:39:39.759615 3412 log.go:172] (0xc000a8d4a0) (0xc000848e60) Create stream\nI0519 00:39:39.759683 3412 log.go:172] (0xc000a8d4a0) (0xc000848e60) Stream added, broadcasting: 1\nI0519 00:39:39.763124 3412 log.go:172] (0xc000a8d4a0) Reply frame received for 1\nI0519 00:39:39.763180 3412 log.go:172] (0xc000a8d4a0) (0xc000718f00) Create stream\nI0519 00:39:39.763243 3412 log.go:172] (0xc000a8d4a0) (0xc000718f00) Stream added, broadcasting: 3\nI0519 00:39:39.764412 3412 log.go:172] (0xc000a8d4a0) Reply frame received for 3\nI0519 00:39:39.764457 3412 log.go:172] 
(0xc000a8d4a0) (0xc000a96140) Create stream\nI0519 00:39:39.764488 3412 log.go:172] (0xc000a8d4a0) (0xc000a96140) Stream added, broadcasting: 5\nI0519 00:39:39.765822 3412 log.go:172] (0xc000a8d4a0) Reply frame received for 5\nI0519 00:39:39.834535 3412 log.go:172] (0xc000a8d4a0) Data frame received for 3\nI0519 00:39:39.834593 3412 log.go:172] (0xc000a8d4a0) Data frame received for 5\nI0519 00:39:39.834648 3412 log.go:172] (0xc000a96140) (5) Data frame handling\nI0519 00:39:39.834676 3412 log.go:172] (0xc000a96140) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30214\nConnection to 172.17.0.12 30214 port [tcp/30214] succeeded!\nI0519 00:39:39.834699 3412 log.go:172] (0xc000718f00) (3) Data frame handling\nI0519 00:39:39.834735 3412 log.go:172] (0xc000a8d4a0) Data frame received for 5\nI0519 00:39:39.834761 3412 log.go:172] (0xc000a96140) (5) Data frame handling\nI0519 00:39:39.836128 3412 log.go:172] (0xc000a8d4a0) Data frame received for 1\nI0519 00:39:39.836170 3412 log.go:172] (0xc000848e60) (1) Data frame handling\nI0519 00:39:39.836193 3412 log.go:172] (0xc000848e60) (1) Data frame sent\nI0519 00:39:39.836226 3412 log.go:172] (0xc000a8d4a0) (0xc000848e60) Stream removed, broadcasting: 1\nI0519 00:39:39.836271 3412 log.go:172] (0xc000a8d4a0) Go away received\nI0519 00:39:39.836589 3412 log.go:172] (0xc000a8d4a0) (0xc000848e60) Stream removed, broadcasting: 1\nI0519 00:39:39.836607 3412 log.go:172] (0xc000a8d4a0) (0xc000718f00) Stream removed, broadcasting: 3\nI0519 00:39:39.836615 3412 log.go:172] (0xc000a8d4a0) (0xc000a96140) Stream removed, broadcasting: 5\n" May 19 00:39:39.841: INFO: stdout: "" May 19 00:39:39.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9127 execpod-affinityprh2p -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30214/ ; done' May 19 00:39:40.110: INFO: stderr: "I0519 00:39:39.970582 3431 log.go:172] (0xc000b536b0) (0xc000b3c5a0) Create stream\nI0519 00:39:39.970640 3431 log.go:172] (0xc000b536b0) (0xc000b3c5a0) Stream added, broadcasting: 1\nI0519 00:39:39.976229 3431 log.go:172] (0xc000b536b0) Reply frame received for 1\nI0519 00:39:39.976294 3431 log.go:172] (0xc000b536b0) (0xc00054c280) Create stream\nI0519 00:39:39.976308 3431 log.go:172] (0xc000b536b0) (0xc00054c280) Stream added, broadcasting: 3\nI0519 00:39:39.977582 3431 log.go:172] (0xc000b536b0) Reply frame received for 3\nI0519 00:39:39.977623 3431 log.go:172] (0xc000b536b0) (0xc00052e1e0) Create stream\nI0519 00:39:39.977634 3431 log.go:172] (0xc000b536b0) (0xc00052e1e0) Stream added, broadcasting: 5\nI0519 00:39:39.978446 3431 log.go:172] (0xc000b536b0) Reply frame received for 5\nI0519 00:39:40.030166 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.030201 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.030222 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.030241 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.030285 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.030313 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.034538 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.034555 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.034570 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 
00:39:40.034900 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.034918 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.034941 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.035177 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.035202 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.035229 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.038510 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.038531 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.038545 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.038807 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.038824 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.038833 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.038842 3431 log.go:172] (0xc000b536b0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeoutI0519 00:39:40.038856 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\n 2 http://172.17.0.13:30214/\nI0519 00:39:40.038871 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.038886 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.038896 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.038912 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.044668 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.044688 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.044705 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.045387 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.045416 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.045429 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.045448 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.045459 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.045471 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.045483 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.045496 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.045515 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.048448 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.048466 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.048483 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.048966 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.048982 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.048993 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.049008 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.049026 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.049040 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.052814 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.052841 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.052854 3431 log.go:172] (0xc00054c280) (3) Data frame 
sent\nI0519 00:39:40.053414 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.053431 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.053443 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.053459 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.053464 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.053470 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.056835 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.056871 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.056901 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.057269 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.057282 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.057288 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ echo\n+ curl -qI0519 00:39:40.057452 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.057475 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.057486 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.057505 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.057515 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.057528 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.061274 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.061292 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.061306 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.062269 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.062298 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.062319 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.062327 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.062336 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.062345 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.062369 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.062383 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.062393 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.065725 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.065748 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.065777 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.066116 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.066129 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.066146 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.066171 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.066184 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.066203 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.070102 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.070130 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.070152 3431 log.go:172] (0xc00054c280) (3) 
Data frame sent\nI0519 00:39:40.070511 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.070544 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.070556 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.070574 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.070584 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.070600 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.073988 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.074030 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.074062 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.074082 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.074093 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.074115 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.074139 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.074159 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.074182 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.078571 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.078594 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.078615 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.079011 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.079031 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.079042 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.079058 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.079080 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.079101 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.082699 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.082720 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.082754 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.083014 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.083052 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.083090 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.083118 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.083142 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.083173 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.087384 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.087408 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.087426 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.087773 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.087791 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.087854 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.087901 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.087922 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.087945 3431 log.go:172] 
(0xc000b536b0) Data frame received for 5\nI0519 00:39:40.087970 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.087994 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.088013 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.091782 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.091792 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.091798 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.092209 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.092224 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.092234 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n+ echo\n+ curl -qI0519 00:39:40.092320 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.092364 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.092390 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.092427 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.092452 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.092472 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\n -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.096552 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.096562 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.096567 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.096956 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.096967 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.096972 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.096985 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.097002 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.097020 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.097031 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.097041 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30214/\nI0519 00:39:40.097062 3431 log.go:172] (0xc00052e1e0) (5) Data frame sent\nI0519 00:39:40.101753 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.101770 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.101788 3431 log.go:172] (0xc00054c280) (3) Data frame sent\nI0519 00:39:40.102010 3431 log.go:172] (0xc000b536b0) Data frame received for 3\nI0519 00:39:40.102023 3431 log.go:172] (0xc00054c280) (3) Data frame handling\nI0519 00:39:40.102166 3431 log.go:172] (0xc000b536b0) Data frame received for 5\nI0519 00:39:40.102193 3431 log.go:172] (0xc00052e1e0) (5) Data frame handling\nI0519 00:39:40.104029 3431 log.go:172] (0xc000b536b0) Data frame received for 1\nI0519 00:39:40.104061 3431 log.go:172] (0xc000b3c5a0) (1) Data frame handling\nI0519 00:39:40.104070 3431 log.go:172] (0xc000b3c5a0) (1) Data frame sent\nI0519 00:39:40.104087 3431 log.go:172] (0xc000b536b0) (0xc000b3c5a0) Stream removed, broadcasting: 1\nI0519 00:39:40.104101 3431 log.go:172] (0xc000b536b0) Go away received\nI0519 00:39:40.104528 3431 log.go:172] (0xc000b536b0) (0xc000b3c5a0) Stream removed, broadcasting: 1\nI0519 00:39:40.104568 3431 log.go:172] (0xc000b536b0) (0xc00054c280) Stream removed, broadcasting: 3\nI0519 00:39:40.104591 3431 
log.go:172] (0xc000b536b0) (0xc00052e1e0) Stream removed, broadcasting: 5\n" May 19 00:39:40.111: INFO: stdout: "\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf\naffinity-nodeport-945bf" May 19 00:39:40.111: INFO: Received response from host: May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Received response from host: affinity-nodeport-945bf May 19 00:39:40.111: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-9127, will wait for the garbage collector to delete the pods May 19 00:39:40.226: INFO: Deleting ReplicationController affinity-nodeport took: 6.956468ms May 19 00:39:40.626: INFO: Terminating ReplicationController affinity-nodeport pods took: 400.203284ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:39:55.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9127" for this suite. 
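The affinity verification above reduces to a single loop: with sessionAffinity: ClientIP on the service, all sixteen requests from the same client pod should return the same backend name, which is exactly what the run of identical affinity-nodeport-945bf responses shows. The equivalent manual probe, using the exec pod and node address from this run:

kubectl -n services-9127 exec execpod-affinityprh2p -- /bin/sh -c \
  'for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://172.17.0.13:30214/; echo; done'
# Expect the same pod name (here affinity-nodeport-945bf) on every line.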
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:27.301 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":194,"skipped":3262,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:39:55.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-8964ae3c-cec5-4c31-907f-d8eb2f0a0b94 STEP: Creating a pod to test consume configMaps May 19 00:39:55.118: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d3bb93d-6b9a-48f8-9384-b98f0efdedfb" in namespace "configmap-5470" to be "Succeeded or Failed" May 19 00:39:55.148: INFO: Pod "pod-configmaps-6d3bb93d-6b9a-48f8-9384-b98f0efdedfb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.060815ms May 19 00:39:57.244: INFO: Pod "pod-configmaps-6d3bb93d-6b9a-48f8-9384-b98f0efdedfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126157031s May 19 00:39:59.249: INFO: Pod "pod-configmaps-6d3bb93d-6b9a-48f8-9384-b98f0efdedfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131353903s STEP: Saw pod success May 19 00:39:59.249: INFO: Pod "pod-configmaps-6d3bb93d-6b9a-48f8-9384-b98f0efdedfb" satisfied condition "Succeeded or Failed" May 19 00:39:59.251: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6d3bb93d-6b9a-48f8-9384-b98f0efdedfb container configmap-volume-test: STEP: delete the pod May 19 00:39:59.413: INFO: Waiting for pod pod-configmaps-6d3bb93d-6b9a-48f8-9384-b98f0efdedfb to disappear May 19 00:39:59.440: INFO: Pod pod-configmaps-6d3bb93d-6b9a-48f8-9384-b98f0efdedfb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:39:59.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5470" for this suite. 
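"Mappings and Item mode set" in the ConfigMap test above means the volume maps a key to a custom path with an explicit per-item file mode. A minimal sketch of such a pod spec; all names, the key data-2, and the 0400 mode are illustrative assumptions, not values from this run:

kubectl -n configmap-5470 create configmap cm-map-demo --from-literal=data-2=value-2
cat <<'EOF' | kubectl -n configmap-5470 apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never            # pod runs once, so "Succeeded or Failed" can be awaited as above
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cm/path/to/data-2 && cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    configMap:
      name: cm-map-demo
      items:
      - key: data-2               # the mapping: key data-2 ...
        path: path/to/data-2      # ... surfaces at a custom relative path
        mode: 0400                # the per-item mode, visible as -r-------- in ls -l
EOF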
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":195,"skipped":3264,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:39:59.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 19 00:39:59.555: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix761407166/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:39:59.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1961" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":196,"skipped":3270,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:39:59.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 19 00:39:59.767: INFO: >>> kubeConfig: /root/.kube/config May 19 00:40:02.755: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:40:12.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9425" for this suite. 
• [SLOW TEST:12.780 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":197,"skipped":3276,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:40:12.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3451 May 19 00:40:16.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3451 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 19 00:40:19.865: INFO: stderr: "I0519 00:40:19.765714 3470 log.go:172] (0xc000668840) (0xc0006517c0) Create stream\nI0519 00:40:19.765763 3470 log.go:172] (0xc000668840) (0xc0006517c0) Stream added, broadcasting: 1\nI0519 00:40:19.768329 3470 log.go:172] (0xc000668840) Reply frame received for 1\nI0519 00:40:19.768366 3470 log.go:172] (0xc000668840) (0xc000651860) Create stream\nI0519 00:40:19.768378 3470 log.go:172] (0xc000668840) (0xc000651860) Stream added, broadcasting: 3\nI0519 00:40:19.769607 3470 log.go:172] (0xc000668840) Reply frame received for 3\nI0519 00:40:19.769652 3470 log.go:172] (0xc000668840) (0xc0006220a0) Create stream\nI0519 00:40:19.769678 3470 log.go:172] (0xc000668840) (0xc0006220a0) Stream added, broadcasting: 5\nI0519 00:40:19.770603 3470 log.go:172] (0xc000668840) Reply frame received for 5\nI0519 00:40:19.853720 3470 log.go:172] (0xc000668840) Data frame received for 5\nI0519 00:40:19.853747 3470 log.go:172] (0xc0006220a0) (5) Data frame handling\nI0519 00:40:19.853764 3470 log.go:172] (0xc0006220a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0519 00:40:19.858102 3470 log.go:172] (0xc000668840) Data frame received for 3\nI0519 00:40:19.858135 3470 log.go:172] (0xc000651860) (3) Data frame handling\nI0519 00:40:19.858163 3470 log.go:172] (0xc000651860) (3) Data frame sent\nI0519 00:40:19.858675 3470 log.go:172] (0xc000668840) Data frame received for 5\nI0519 00:40:19.858700 3470 log.go:172] (0xc0006220a0) (5) Data frame handling\nI0519 00:40:19.858722 3470 log.go:172] (0xc000668840) Data frame received for 3\nI0519 00:40:19.858749 
3470 log.go:172] (0xc000651860) (3) Data frame handling\nI0519 00:40:19.860538 3470 log.go:172] (0xc000668840) Data frame received for 1\nI0519 00:40:19.860553 3470 log.go:172] (0xc0006517c0) (1) Data frame handling\nI0519 00:40:19.860560 3470 log.go:172] (0xc0006517c0) (1) Data frame sent\nI0519 00:40:19.860569 3470 log.go:172] (0xc000668840) (0xc0006517c0) Stream removed, broadcasting: 1\nI0519 00:40:19.860587 3470 log.go:172] (0xc000668840) Go away received\nI0519 00:40:19.860928 3470 log.go:172] (0xc000668840) (0xc0006517c0) Stream removed, broadcasting: 1\nI0519 00:40:19.860948 3470 log.go:172] (0xc000668840) (0xc000651860) Stream removed, broadcasting: 3\nI0519 00:40:19.860959 3470 log.go:172] (0xc000668840) (0xc0006220a0) Stream removed, broadcasting: 5\n" May 19 00:40:19.865: INFO: stdout: "iptables" May 19 00:40:19.865: INFO: proxyMode: iptables May 19 00:40:19.871: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 00:40:19.905: INFO: Pod kube-proxy-mode-detector still exists May 19 00:40:21.905: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 00:40:21.909: INFO: Pod kube-proxy-mode-detector still exists May 19 00:40:23.905: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 00:40:23.914: INFO: Pod kube-proxy-mode-detector still exists May 19 00:40:25.905: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 19 00:40:25.908: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3451 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3451 I0519 00:40:26.011440 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3451, replica count: 3 I0519 00:40:29.061915 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:40:32.062153 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 00:40:32.069: INFO: Creating new exec pod May 19 00:40:37.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3451 execpod-affinitytrfvj -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 19 00:40:37.347: INFO: stderr: "I0519 00:40:37.232567 3498 log.go:172] (0xc000a791e0) (0xc0007054a0) Create stream\nI0519 00:40:37.232626 3498 log.go:172] (0xc000a791e0) (0xc0007054a0) Stream added, broadcasting: 1\nI0519 00:40:37.237542 3498 log.go:172] (0xc000a791e0) Reply frame received for 1\nI0519 00:40:37.238339 3498 log.go:172] (0xc000a791e0) (0xc000734e60) Create stream\nI0519 00:40:37.238436 3498 log.go:172] (0xc000a791e0) (0xc000734e60) Stream added, broadcasting: 3\nI0519 00:40:37.242031 3498 log.go:172] (0xc000a791e0) Reply frame received for 3\nI0519 00:40:37.242094 3498 log.go:172] (0xc000a791e0) (0xc000223180) Create stream\nI0519 00:40:37.242125 3498 log.go:172] (0xc000a791e0) (0xc000223180) Stream added, broadcasting: 5\nI0519 00:40:37.243257 3498 log.go:172] (0xc000a791e0) Reply frame received for 5\nI0519 00:40:37.317560 3498 log.go:172] (0xc000a791e0) Data frame received for 5\nI0519 00:40:37.317582 3498 log.go:172] (0xc000223180) (5) Data frame handling\nI0519 00:40:37.317595 3498 log.go:172] (0xc000223180) (5) Data frame sent\n+ nc -zv -t -w 2 
affinity-clusterip-timeout 80\nI0519 00:40:37.340135 3498 log.go:172] (0xc000a791e0) Data frame received for 5\nI0519 00:40:37.340170 3498 log.go:172] (0xc000223180) (5) Data frame handling\nI0519 00:40:37.340186 3498 log.go:172] (0xc000223180) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0519 00:40:37.340294 3498 log.go:172] (0xc000a791e0) Data frame received for 3\nI0519 00:40:37.340315 3498 log.go:172] (0xc000734e60) (3) Data frame handling\nI0519 00:40:37.340373 3498 log.go:172] (0xc000a791e0) Data frame received for 5\nI0519 00:40:37.340385 3498 log.go:172] (0xc000223180) (5) Data frame handling\nI0519 00:40:37.343495 3498 log.go:172] (0xc000a791e0) Data frame received for 1\nI0519 00:40:37.343509 3498 log.go:172] (0xc0007054a0) (1) Data frame handling\nI0519 00:40:37.343521 3498 log.go:172] (0xc0007054a0) (1) Data frame sent\nI0519 00:40:37.343532 3498 log.go:172] (0xc000a791e0) (0xc0007054a0) Stream removed, broadcasting: 1\nI0519 00:40:37.343544 3498 log.go:172] (0xc000a791e0) Go away received\nI0519 00:40:37.343837 3498 log.go:172] (0xc000a791e0) (0xc0007054a0) Stream removed, broadcasting: 1\nI0519 00:40:37.343853 3498 log.go:172] (0xc000a791e0) (0xc000734e60) Stream removed, broadcasting: 3\nI0519 00:40:37.343862 3498 log.go:172] (0xc000a791e0) (0xc000223180) Stream removed, broadcasting: 5\n" May 19 00:40:37.348: INFO: stdout: "" May 19 00:40:37.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3451 execpod-affinitytrfvj -- /bin/sh -x -c nc -zv -t -w 2 10.111.249.10 80' May 19 00:40:37.550: INFO: stderr: "I0519 00:40:37.472934 3518 log.go:172] (0xc0009440b0) (0xc00047e1e0) Create stream\nI0519 00:40:37.472984 3518 log.go:172] (0xc0009440b0) (0xc00047e1e0) Stream added, broadcasting: 1\nI0519 00:40:37.475180 3518 log.go:172] (0xc0009440b0) Reply frame received for 1\nI0519 00:40:37.475215 3518 log.go:172] (0xc0009440b0) (0xc000446dc0) Create stream\nI0519 00:40:37.475228 3518 log.go:172] (0xc0009440b0) (0xc000446dc0) Stream added, broadcasting: 3\nI0519 00:40:37.476013 3518 log.go:172] (0xc0009440b0) Reply frame received for 3\nI0519 00:40:37.476047 3518 log.go:172] (0xc0009440b0) (0xc00032c0a0) Create stream\nI0519 00:40:37.476056 3518 log.go:172] (0xc0009440b0) (0xc00032c0a0) Stream added, broadcasting: 5\nI0519 00:40:37.476978 3518 log.go:172] (0xc0009440b0) Reply frame received for 5\nI0519 00:40:37.542415 3518 log.go:172] (0xc0009440b0) Data frame received for 5\nI0519 00:40:37.542453 3518 log.go:172] (0xc00032c0a0) (5) Data frame handling\nI0519 00:40:37.542480 3518 log.go:172] (0xc00032c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.111.249.10 80\nI0519 00:40:37.542703 3518 log.go:172] (0xc0009440b0) Data frame received for 5\nI0519 00:40:37.542732 3518 log.go:172] (0xc00032c0a0) (5) Data frame handling\nI0519 00:40:37.542756 3518 log.go:172] (0xc00032c0a0) (5) Data frame sent\nConnection to 10.111.249.10 80 port [tcp/http] succeeded!\nI0519 00:40:37.543093 3518 log.go:172] (0xc0009440b0) Data frame received for 3\nI0519 00:40:37.543109 3518 log.go:172] (0xc000446dc0) (3) Data frame handling\nI0519 00:40:37.543141 3518 log.go:172] (0xc0009440b0) Data frame received for 5\nI0519 00:40:37.543150 3518 log.go:172] (0xc00032c0a0) (5) Data frame handling\nI0519 00:40:37.544617 3518 log.go:172] (0xc0009440b0) Data frame received for 1\nI0519 00:40:37.544653 3518 log.go:172] (0xc00047e1e0) (1) Data frame handling\nI0519 00:40:37.544707 3518 
log.go:172] (0xc00047e1e0) (1) Data frame sent\nI0519 00:40:37.544771 3518 log.go:172] (0xc0009440b0) (0xc00047e1e0) Stream removed, broadcasting: 1\nI0519 00:40:37.544802 3518 log.go:172] (0xc0009440b0) Go away received\nI0519 00:40:37.545442 3518 log.go:172] (0xc0009440b0) (0xc00047e1e0) Stream removed, broadcasting: 1\nI0519 00:40:37.545466 3518 log.go:172] (0xc0009440b0) (0xc000446dc0) Stream removed, broadcasting: 3\nI0519 00:40:37.545487 3518 log.go:172] (0xc0009440b0) (0xc00032c0a0) Stream removed, broadcasting: 5\n" May 19 00:40:37.550: INFO: stdout: "" May 19 00:40:37.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3451 execpod-affinitytrfvj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.249.10:80/ ; done' May 19 00:40:37.860: INFO: stderr: "I0519 00:40:37.688806 3539 log.go:172] (0xc0009574a0) (0xc000900780) Create stream\nI0519 00:40:37.688855 3539 log.go:172] (0xc0009574a0) (0xc000900780) Stream added, broadcasting: 1\nI0519 00:40:37.694461 3539 log.go:172] (0xc0009574a0) Reply frame received for 1\nI0519 00:40:37.694524 3539 log.go:172] (0xc0009574a0) (0xc000510d20) Create stream\nI0519 00:40:37.694549 3539 log.go:172] (0xc0009574a0) (0xc000510d20) Stream added, broadcasting: 3\nI0519 00:40:37.695407 3539 log.go:172] (0xc0009574a0) Reply frame received for 3\nI0519 00:40:37.695438 3539 log.go:172] (0xc0009574a0) (0xc00014f680) Create stream\nI0519 00:40:37.695452 3539 log.go:172] (0xc0009574a0) (0xc00014f680) Stream added, broadcasting: 5\nI0519 00:40:37.696408 3539 log.go:172] (0xc0009574a0) Reply frame received for 5\nI0519 00:40:37.763448 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.763497 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.763514 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.763544 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.763554 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.763566 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.768447 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.768468 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.768485 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.768913 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.768931 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.768947 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.769091 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.769299 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.769338 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.775214 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.775236 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.775256 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.775795 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.775829 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.775855 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.775870 3539 log.go:172] (0xc00014f680) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.775886 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.775899 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.782367 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.782385 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.782395 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.783241 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.783256 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.783263 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.783273 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.783279 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.783285 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.789233 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.789260 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.789274 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.789728 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.789755 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.789777 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.789792 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.789809 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.789821 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.794079 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.794102 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.794121 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.794712 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.794733 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.794742 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.794756 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.794790 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.794805 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.799322 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.799337 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.799356 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.799756 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.799775 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.799796 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.799809 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.799817 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.799828 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.804649 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.804659 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.804664 3539 log.go:172] (0xc000510d20) (3) Data frame 
sent\nI0519 00:40:37.805028 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.805044 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.805050 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.805098 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.805237 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.805257 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.810856 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.810866 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.810872 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.811390 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.811400 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.811406 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.811427 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.811443 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.811464 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.815244 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.815254 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.815260 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.815600 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.815608 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.815614 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.815630 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.815639 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.815647 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.819156 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.819191 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.819222 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.819441 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.819457 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.819463 3539 log.go:172] (0xc00014f680) (5) Data frame sent\nI0519 00:40:37.819471 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.819475 3539 log.go:172] (0xc000510d20) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.819480 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.823894 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.823915 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.823932 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.824391 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.824401 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.824407 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.824415 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.824428 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.824435 3539 log.go:172] (0xc00014f680) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.828577 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.828597 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.828614 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.829275 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.829315 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.829333 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.829351 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.829378 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.829404 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.833712 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.833726 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.833734 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.834218 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.834233 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.834251 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.834258 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.834290 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.834319 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.842198 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.842216 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.842233 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.842686 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.842748 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.842790 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.842866 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.842897 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.842916 3539 log.go:172] (0xc00014f680) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.846177 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.846206 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.846228 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.846598 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.846613 3539 log.go:172] (0xc00014f680) (5) Data frame handling\nI0519 00:40:37.846630 3539 log.go:172] (0xc00014f680) (5) Data frame sent\nI0519 00:40:37.846641 3539 log.go:172] (0xc0009574a0) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:37.846651 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.846685 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.851300 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.851316 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.851333 3539 log.go:172] (0xc000510d20) (3) Data frame sent\nI0519 00:40:37.852079 3539 log.go:172] (0xc0009574a0) Data frame received for 5\nI0519 00:40:37.852101 3539 log.go:172] (0xc00014f680) (5) Data frame 
handling\nI0519 00:40:37.852114 3539 log.go:172] (0xc0009574a0) Data frame received for 3\nI0519 00:40:37.852132 3539 log.go:172] (0xc000510d20) (3) Data frame handling\nI0519 00:40:37.853843 3539 log.go:172] (0xc0009574a0) Data frame received for 1\nI0519 00:40:37.853862 3539 log.go:172] (0xc000900780) (1) Data frame handling\nI0519 00:40:37.853876 3539 log.go:172] (0xc000900780) (1) Data frame sent\nI0519 00:40:37.854040 3539 log.go:172] (0xc0009574a0) (0xc000900780) Stream removed, broadcasting: 1\nI0519 00:40:37.854075 3539 log.go:172] (0xc0009574a0) Go away received\nI0519 00:40:37.854375 3539 log.go:172] (0xc0009574a0) (0xc000900780) Stream removed, broadcasting: 1\nI0519 00:40:37.854394 3539 log.go:172] (0xc0009574a0) (0xc000510d20) Stream removed, broadcasting: 3\nI0519 00:40:37.854403 3539 log.go:172] (0xc0009574a0) (0xc00014f680) Stream removed, broadcasting: 5\n" May 19 00:40:37.861: INFO: stdout: "\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx\naffinity-clusterip-timeout-8k7hx" May 19 00:40:37.861: INFO: Received response from host: May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Received response from host: affinity-clusterip-timeout-8k7hx May 19 00:40:37.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3451 execpod-affinitytrfvj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.249.10:80/' May 19 00:40:38.075: INFO: stderr: "I0519 00:40:37.995290 3560 log.go:172] (0xc00003b8c0) (0xc0006cd540) Create stream\nI0519 00:40:37.995359 3560 log.go:172] (0xc00003b8c0) (0xc0006cd540) Stream added, broadcasting: 1\nI0519 00:40:37.999586 3560 log.go:172] (0xc00003b8c0) Reply frame received for 1\nI0519 00:40:37.999635 3560 log.go:172] 
(0xc00003b8c0) (0xc0006a4d20) Create stream\nI0519 00:40:37.999655 3560 log.go:172] (0xc00003b8c0) (0xc0006a4d20) Stream added, broadcasting: 3\nI0519 00:40:38.000900 3560 log.go:172] (0xc00003b8c0) Reply frame received for 3\nI0519 00:40:38.000938 3560 log.go:172] (0xc00003b8c0) (0xc00069c5a0) Create stream\nI0519 00:40:38.000952 3560 log.go:172] (0xc00003b8c0) (0xc00069c5a0) Stream added, broadcasting: 5\nI0519 00:40:38.002105 3560 log.go:172] (0xc00003b8c0) Reply frame received for 5\nI0519 00:40:38.065250 3560 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0519 00:40:38.065278 3560 log.go:172] (0xc00069c5a0) (5) Data frame handling\nI0519 00:40:38.065301 3560 log.go:172] (0xc00069c5a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:38.068085 3560 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0519 00:40:38.068108 3560 log.go:172] (0xc0006a4d20) (3) Data frame handling\nI0519 00:40:38.068126 3560 log.go:172] (0xc0006a4d20) (3) Data frame sent\nI0519 00:40:38.068532 3560 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0519 00:40:38.068556 3560 log.go:172] (0xc00069c5a0) (5) Data frame handling\nI0519 00:40:38.068640 3560 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0519 00:40:38.068667 3560 log.go:172] (0xc0006a4d20) (3) Data frame handling\nI0519 00:40:38.070206 3560 log.go:172] (0xc00003b8c0) Data frame received for 1\nI0519 00:40:38.070220 3560 log.go:172] (0xc0006cd540) (1) Data frame handling\nI0519 00:40:38.070231 3560 log.go:172] (0xc0006cd540) (1) Data frame sent\nI0519 00:40:38.070242 3560 log.go:172] (0xc00003b8c0) (0xc0006cd540) Stream removed, broadcasting: 1\nI0519 00:40:38.070329 3560 log.go:172] (0xc00003b8c0) Go away received\nI0519 00:40:38.070697 3560 log.go:172] (0xc00003b8c0) (0xc0006cd540) Stream removed, broadcasting: 1\nI0519 00:40:38.070726 3560 log.go:172] (0xc00003b8c0) (0xc0006a4d20) Stream removed, broadcasting: 3\nI0519 00:40:38.070745 3560 log.go:172] (0xc00003b8c0) (0xc00069c5a0) Stream removed, broadcasting: 5\n" May 19 00:40:38.075: INFO: stdout: "affinity-clusterip-timeout-8k7hx" May 19 00:40:53.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3451 execpod-affinitytrfvj -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.249.10:80/' May 19 00:40:53.328: INFO: stderr: "I0519 00:40:53.214952 3582 log.go:172] (0xc00003b8c0) (0xc000bc4460) Create stream\nI0519 00:40:53.215047 3582 log.go:172] (0xc00003b8c0) (0xc000bc4460) Stream added, broadcasting: 1\nI0519 00:40:53.220773 3582 log.go:172] (0xc00003b8c0) Reply frame received for 1\nI0519 00:40:53.220820 3582 log.go:172] (0xc00003b8c0) (0xc0005781e0) Create stream\nI0519 00:40:53.220837 3582 log.go:172] (0xc00003b8c0) (0xc0005781e0) Stream added, broadcasting: 3\nI0519 00:40:53.222197 3582 log.go:172] (0xc00003b8c0) Reply frame received for 3\nI0519 00:40:53.222240 3582 log.go:172] (0xc00003b8c0) (0xc0005441e0) Create stream\nI0519 00:40:53.222255 3582 log.go:172] (0xc00003b8c0) (0xc0005441e0) Stream added, broadcasting: 5\nI0519 00:40:53.223203 3582 log.go:172] (0xc00003b8c0) Reply frame received for 5\nI0519 00:40:53.320090 3582 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0519 00:40:53.320117 3582 log.go:172] (0xc0005441e0) (5) Data frame handling\nI0519 00:40:53.320142 3582 log.go:172] (0xc0005441e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.111.249.10:80/\nI0519 00:40:53.321650 3582 log.go:172] 
(0xc00003b8c0) Data frame received for 3\nI0519 00:40:53.321686 3582 log.go:172] (0xc0005781e0) (3) Data frame handling\nI0519 00:40:53.321714 3582 log.go:172] (0xc0005781e0) (3) Data frame sent\nI0519 00:40:53.322108 3582 log.go:172] (0xc00003b8c0) Data frame received for 3\nI0519 00:40:53.322135 3582 log.go:172] (0xc0005781e0) (3) Data frame handling\nI0519 00:40:53.322167 3582 log.go:172] (0xc00003b8c0) Data frame received for 5\nI0519 00:40:53.322189 3582 log.go:172] (0xc0005441e0) (5) Data frame handling\nI0519 00:40:53.323885 3582 log.go:172] (0xc00003b8c0) Data frame received for 1\nI0519 00:40:53.323909 3582 log.go:172] (0xc000bc4460) (1) Data frame handling\nI0519 00:40:53.323921 3582 log.go:172] (0xc000bc4460) (1) Data frame sent\nI0519 00:40:53.323937 3582 log.go:172] (0xc00003b8c0) (0xc000bc4460) Stream removed, broadcasting: 1\nI0519 00:40:53.324009 3582 log.go:172] (0xc00003b8c0) Go away received\nI0519 00:40:53.324256 3582 log.go:172] (0xc00003b8c0) (0xc000bc4460) Stream removed, broadcasting: 1\nI0519 00:40:53.324275 3582 log.go:172] (0xc00003b8c0) (0xc0005781e0) Stream removed, broadcasting: 3\nI0519 00:40:53.324288 3582 log.go:172] (0xc00003b8c0) (0xc0005441e0) Stream removed, broadcasting: 5\n" May 19 00:40:53.328: INFO: stdout: "affinity-clusterip-timeout-45gbj" May 19 00:40:53.328: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3451, will wait for the garbage collector to delete the pods May 19 00:40:53.453: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.689818ms May 19 00:40:53.854: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.241321ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:41:05.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3451" for this suite. 
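What this spec exercises: with sessionAffinity: ClientIP, kube-proxy pins each client to a single backend pod, and the pin expires after sessionAffinityConfig.clientIP.timeoutSeconds of inactivity. That is why every request in the loop above returned affinity-clusterip-timeout-8k7hx, while the request issued after the 15-second idle wait landed on affinity-clusterip-timeout-45gbj. A minimal sketch of such a Service, assuming a hypothetical selector, backend port, and 10-second timeout (the test builds its objects through the Go client, not from a manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
  namespace: services-3451
spec:
  selector:
    name: affinity-clusterip-timeout   # assumed label on the RC's pods
  ports:
  - port: 80
    targetPort: 9376                   # assumed backend container port
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10               # affinity entry expires after 10s idle
EOF

# Requests inside the window stick to one pod; sleeping past the timeout
# lets the next request pick a different backend, as seen above:
kubectl exec --namespace=services-3451 execpod-affinitytrfvj -- /bin/sh -c \
  'for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://10.111.249.10:80/; echo; done'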
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:53.124 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":198,"skipped":3284,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:41:05.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-09f7b8aa-2d81-4d77-8a87-92458c681299 May 19 00:41:05.669: INFO: Pod name my-hostname-basic-09f7b8aa-2d81-4d77-8a87-92458c681299: Found 0 pods out of 1 May 19 00:41:10.672: INFO: Pod name my-hostname-basic-09f7b8aa-2d81-4d77-8a87-92458c681299: Found 1 pods out of 1 May 19 00:41:10.672: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-09f7b8aa-2d81-4d77-8a87-92458c681299" are running May 19 00:41:10.675: INFO: Pod "my-hostname-basic-09f7b8aa-2d81-4d77-8a87-92458c681299-mpcb7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 00:41:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 00:41:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 00:41:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-19 00:41:05 +0000 UTC Reason: Message:}]) May 19 00:41:10.676: INFO: Trying to dial the pod May 19 00:41:15.686: INFO: Controller my-hostname-basic-09f7b8aa-2d81-4d77-8a87-92458c681299: Got expected result from replica 1 [my-hostname-basic-09f7b8aa-2d81-4d77-8a87-92458c681299-mpcb7]: "my-hostname-basic-09f7b8aa-2d81-4d77-8a87-92458c681299-mpcb7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:41:15.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4474" for this suite. 
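The spec above creates a single-replica ReplicationController whose pod serves its own hostname over HTTP, then dials each replica and checks the reply matches the pod name. A rough equivalent manifest, assuming hypothetical names (the test appends a UUID to the name prefix; the agnhost image and serve-hostname argument appear verbatim later in this log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: ["serve-hostname"]     # replies to HTTP GET with the pod's hostname
        ports:
        - containerPort: 9376        # serve-hostname's default port
EOF

# Each replica should answer with its own pod name:
kubectl get pods -l name=my-hostname-basic -o name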
• [SLOW TEST:10.122 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":199,"skipped":3291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:41:15.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 19 00:41:25.850: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:25.850: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:25.873441 7 log.go:172] (0xc002bee790) (0xc002d40500) Create stream I0519 00:41:25.873471 7 log.go:172] (0xc002bee790) (0xc002d40500) Stream added, broadcasting: 1 I0519 00:41:25.874934 7 log.go:172] (0xc002bee790) Reply frame received for 1 I0519 00:41:25.874982 7 log.go:172] (0xc002bee790) (0xc0013ee780) Create stream I0519 00:41:25.874996 7 log.go:172] (0xc002bee790) (0xc0013ee780) Stream added, broadcasting: 3 I0519 00:41:25.875804 7 log.go:172] (0xc002bee790) Reply frame received for 3 I0519 00:41:25.875840 7 log.go:172] (0xc002bee790) (0xc00131a960) Create stream I0519 00:41:25.875854 7 log.go:172] (0xc002bee790) (0xc00131a960) Stream added, broadcasting: 5 I0519 00:41:25.876657 7 log.go:172] (0xc002bee790) Reply frame received for 5 I0519 00:41:25.946616 7 log.go:172] (0xc002bee790) Data frame received for 3 I0519 00:41:25.946653 7 log.go:172] (0xc0013ee780) (3) Data frame handling I0519 00:41:25.946663 7 log.go:172] (0xc0013ee780) (3) Data frame sent I0519 00:41:25.946671 7 log.go:172] (0xc002bee790) Data frame received for 3 I0519 00:41:25.946678 7 log.go:172] (0xc0013ee780) (3) Data frame handling I0519 00:41:25.946702 7 log.go:172] (0xc002bee790) Data frame received for 5 I0519 00:41:25.946712 7 log.go:172] (0xc00131a960) (5) Data frame handling I0519 00:41:25.948139 7 log.go:172] (0xc002bee790) Data frame received for 1 I0519 00:41:25.948174 7 log.go:172] (0xc002d40500) (1) Data frame handling I0519 00:41:25.948204 7 log.go:172] (0xc002d40500) (1) Data frame sent I0519 00:41:25.948228 7 log.go:172] (0xc002bee790) (0xc002d40500) Stream removed, broadcasting: 1 I0519 00:41:25.948357 
7 log.go:172] (0xc002bee790) (0xc002d40500) Stream removed, broadcasting: 1 I0519 00:41:25.948381 7 log.go:172] (0xc002bee790) (0xc0013ee780) Stream removed, broadcasting: 3 I0519 00:41:25.948466 7 log.go:172] (0xc002bee790) Go away received I0519 00:41:25.948604 7 log.go:172] (0xc002bee790) (0xc00131a960) Stream removed, broadcasting: 5 May 19 00:41:25.948: INFO: Exec stderr: "" May 19 00:41:25.948: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:25.948: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:25.975175 7 log.go:172] (0xc001fc6630) (0xc00131adc0) Create stream I0519 00:41:25.975207 7 log.go:172] (0xc001fc6630) (0xc00131adc0) Stream added, broadcasting: 1 I0519 00:41:25.976798 7 log.go:172] (0xc001fc6630) Reply frame received for 1 I0519 00:41:25.976839 7 log.go:172] (0xc001fc6630) (0xc002c46000) Create stream I0519 00:41:25.976857 7 log.go:172] (0xc001fc6630) (0xc002c46000) Stream added, broadcasting: 3 I0519 00:41:25.977900 7 log.go:172] (0xc001fc6630) Reply frame received for 3 I0519 00:41:25.977931 7 log.go:172] (0xc001fc6630) (0xc001a9b860) Create stream I0519 00:41:25.977941 7 log.go:172] (0xc001fc6630) (0xc001a9b860) Stream added, broadcasting: 5 I0519 00:41:25.978733 7 log.go:172] (0xc001fc6630) Reply frame received for 5 I0519 00:41:26.031846 7 log.go:172] (0xc001fc6630) Data frame received for 3 I0519 00:41:26.031893 7 log.go:172] (0xc002c46000) (3) Data frame handling I0519 00:41:26.031938 7 log.go:172] (0xc002c46000) (3) Data frame sent I0519 00:41:26.031970 7 log.go:172] (0xc001fc6630) Data frame received for 3 I0519 00:41:26.032059 7 log.go:172] (0xc002c46000) (3) Data frame handling I0519 00:41:26.032130 7 log.go:172] (0xc001fc6630) Data frame received for 5 I0519 00:41:26.032181 7 log.go:172] (0xc001a9b860) (5) Data frame handling I0519 00:41:26.033931 7 log.go:172] (0xc001fc6630) Data frame received for 1 I0519 00:41:26.033961 7 log.go:172] (0xc00131adc0) (1) Data frame handling I0519 00:41:26.033997 7 log.go:172] (0xc00131adc0) (1) Data frame sent I0519 00:41:26.034021 7 log.go:172] (0xc001fc6630) (0xc00131adc0) Stream removed, broadcasting: 1 I0519 00:41:26.034103 7 log.go:172] (0xc001fc6630) Go away received I0519 00:41:26.034157 7 log.go:172] (0xc001fc6630) (0xc00131adc0) Stream removed, broadcasting: 1 I0519 00:41:26.034177 7 log.go:172] (0xc001fc6630) (0xc002c46000) Stream removed, broadcasting: 3 I0519 00:41:26.034188 7 log.go:172] (0xc001fc6630) (0xc001a9b860) Stream removed, broadcasting: 5 May 19 00:41:26.034: INFO: Exec stderr: "" May 19 00:41:26.034: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:26.034: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:26.063385 7 log.go:172] (0xc001fc6c60) (0xc00131b540) Create stream I0519 00:41:26.063436 7 log.go:172] (0xc001fc6c60) (0xc00131b540) Stream added, broadcasting: 1 I0519 00:41:26.071585 7 log.go:172] (0xc001fc6c60) Reply frame received for 1 I0519 00:41:26.071635 7 log.go:172] (0xc001fc6c60) (0xc0013ee960) Create stream I0519 00:41:26.071648 7 log.go:172] (0xc001fc6c60) (0xc0013ee960) Stream added, broadcasting: 3 I0519 00:41:26.077642 7 log.go:172] (0xc001fc6c60) Reply frame received for 3 I0519 00:41:26.077681 7 log.go:172] (0xc001fc6c60) (0xc00131b5e0) Create stream 
I0519 00:41:26.077743 7 log.go:172] (0xc001fc6c60) (0xc00131b5e0) Stream added, broadcasting: 5 I0519 00:41:26.078964 7 log.go:172] (0xc001fc6c60) Reply frame received for 5 I0519 00:41:26.143141 7 log.go:172] (0xc001fc6c60) Data frame received for 3 I0519 00:41:26.143180 7 log.go:172] (0xc0013ee960) (3) Data frame handling I0519 00:41:26.143192 7 log.go:172] (0xc0013ee960) (3) Data frame sent I0519 00:41:26.143211 7 log.go:172] (0xc001fc6c60) Data frame received for 3 I0519 00:41:26.143222 7 log.go:172] (0xc0013ee960) (3) Data frame handling I0519 00:41:26.143249 7 log.go:172] (0xc001fc6c60) Data frame received for 5 I0519 00:41:26.143267 7 log.go:172] (0xc00131b5e0) (5) Data frame handling I0519 00:41:26.144699 7 log.go:172] (0xc001fc6c60) Data frame received for 1 I0519 00:41:26.144747 7 log.go:172] (0xc00131b540) (1) Data frame handling I0519 00:41:26.144774 7 log.go:172] (0xc00131b540) (1) Data frame sent I0519 00:41:26.144793 7 log.go:172] (0xc001fc6c60) (0xc00131b540) Stream removed, broadcasting: 1 I0519 00:41:26.144810 7 log.go:172] (0xc001fc6c60) Go away received I0519 00:41:26.144953 7 log.go:172] (0xc001fc6c60) (0xc00131b540) Stream removed, broadcasting: 1 I0519 00:41:26.145001 7 log.go:172] (0xc001fc6c60) (0xc0013ee960) Stream removed, broadcasting: 3 I0519 00:41:26.145037 7 log.go:172] (0xc001fc6c60) (0xc00131b5e0) Stream removed, broadcasting: 5 May 19 00:41:26.145: INFO: Exec stderr: "" May 19 00:41:26.145: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:26.145: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:26.169810 7 log.go:172] (0xc002beedc0) (0xc002d406e0) Create stream I0519 00:41:26.169843 7 log.go:172] (0xc002beedc0) (0xc002d406e0) Stream added, broadcasting: 1 I0519 00:41:26.171316 7 log.go:172] (0xc002beedc0) Reply frame received for 1 I0519 00:41:26.171346 7 log.go:172] (0xc002beedc0) (0xc002c460a0) Create stream I0519 00:41:26.171367 7 log.go:172] (0xc002beedc0) (0xc002c460a0) Stream added, broadcasting: 3 I0519 00:41:26.172210 7 log.go:172] (0xc002beedc0) Reply frame received for 3 I0519 00:41:26.172235 7 log.go:172] (0xc002beedc0) (0xc00131b680) Create stream I0519 00:41:26.172244 7 log.go:172] (0xc002beedc0) (0xc00131b680) Stream added, broadcasting: 5 I0519 00:41:26.173032 7 log.go:172] (0xc002beedc0) Reply frame received for 5 I0519 00:41:26.246597 7 log.go:172] (0xc002beedc0) Data frame received for 5 I0519 00:41:26.246642 7 log.go:172] (0xc00131b680) (5) Data frame handling I0519 00:41:26.246665 7 log.go:172] (0xc002beedc0) Data frame received for 3 I0519 00:41:26.246697 7 log.go:172] (0xc002c460a0) (3) Data frame handling I0519 00:41:26.246718 7 log.go:172] (0xc002c460a0) (3) Data frame sent I0519 00:41:26.246729 7 log.go:172] (0xc002beedc0) Data frame received for 3 I0519 00:41:26.246738 7 log.go:172] (0xc002c460a0) (3) Data frame handling I0519 00:41:26.247956 7 log.go:172] (0xc002beedc0) Data frame received for 1 I0519 00:41:26.247975 7 log.go:172] (0xc002d406e0) (1) Data frame handling I0519 00:41:26.247985 7 log.go:172] (0xc002d406e0) (1) Data frame sent I0519 00:41:26.248002 7 log.go:172] (0xc002beedc0) (0xc002d406e0) Stream removed, broadcasting: 1 I0519 00:41:26.248015 7 log.go:172] (0xc002beedc0) Go away received I0519 00:41:26.248154 7 log.go:172] (0xc002beedc0) (0xc002d406e0) Stream removed, broadcasting: 1 I0519 00:41:26.248211 7 log.go:172] (0xc002beedc0) 
(0xc002c460a0) Stream removed, broadcasting: 3 I0519 00:41:26.248235 7 log.go:172] (0xc002beedc0) (0xc00131b680) Stream removed, broadcasting: 5 May 19 00:41:26.248: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 19 00:41:26.248: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:26.248: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:26.276221 7 log.go:172] (0xc002356840) (0xc001a9bcc0) Create stream I0519 00:41:26.276261 7 log.go:172] (0xc002356840) (0xc001a9bcc0) Stream added, broadcasting: 1 I0519 00:41:26.278183 7 log.go:172] (0xc002356840) Reply frame received for 1 I0519 00:41:26.278227 7 log.go:172] (0xc002356840) (0xc002c461e0) Create stream I0519 00:41:26.278244 7 log.go:172] (0xc002356840) (0xc002c461e0) Stream added, broadcasting: 3 I0519 00:41:26.278976 7 log.go:172] (0xc002356840) Reply frame received for 3 I0519 00:41:26.279012 7 log.go:172] (0xc002356840) (0xc00131ba40) Create stream I0519 00:41:26.279024 7 log.go:172] (0xc002356840) (0xc00131ba40) Stream added, broadcasting: 5 I0519 00:41:26.279842 7 log.go:172] (0xc002356840) Reply frame received for 5 I0519 00:41:26.341886 7 log.go:172] (0xc002356840) Data frame received for 5 I0519 00:41:26.341919 7 log.go:172] (0xc00131ba40) (5) Data frame handling I0519 00:41:26.341939 7 log.go:172] (0xc002356840) Data frame received for 3 I0519 00:41:26.341952 7 log.go:172] (0xc002c461e0) (3) Data frame handling I0519 00:41:26.341978 7 log.go:172] (0xc002c461e0) (3) Data frame sent I0519 00:41:26.342145 7 log.go:172] (0xc002356840) Data frame received for 3 I0519 00:41:26.342164 7 log.go:172] (0xc002c461e0) (3) Data frame handling I0519 00:41:26.343549 7 log.go:172] (0xc002356840) Data frame received for 1 I0519 00:41:26.343563 7 log.go:172] (0xc001a9bcc0) (1) Data frame handling I0519 00:41:26.343582 7 log.go:172] (0xc001a9bcc0) (1) Data frame sent I0519 00:41:26.343598 7 log.go:172] (0xc002356840) (0xc001a9bcc0) Stream removed, broadcasting: 1 I0519 00:41:26.343611 7 log.go:172] (0xc002356840) Go away received I0519 00:41:26.343750 7 log.go:172] (0xc002356840) (0xc001a9bcc0) Stream removed, broadcasting: 1 I0519 00:41:26.343768 7 log.go:172] (0xc002356840) (0xc002c461e0) Stream removed, broadcasting: 3 I0519 00:41:26.343776 7 log.go:172] (0xc002356840) (0xc00131ba40) Stream removed, broadcasting: 5 May 19 00:41:26.343: INFO: Exec stderr: "" May 19 00:41:26.343: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:26.343: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:26.371934 7 log.go:172] (0xc002bef3f0) (0xc002d408c0) Create stream I0519 00:41:26.371954 7 log.go:172] (0xc002bef3f0) (0xc002d408c0) Stream added, broadcasting: 1 I0519 00:41:26.373744 7 log.go:172] (0xc002bef3f0) Reply frame received for 1 I0519 00:41:26.373799 7 log.go:172] (0xc002bef3f0) (0xc00131bcc0) Create stream I0519 00:41:26.373823 7 log.go:172] (0xc002bef3f0) (0xc00131bcc0) Stream added, broadcasting: 3 I0519 00:41:26.374674 7 log.go:172] (0xc002bef3f0) Reply frame received for 3 I0519 00:41:26.374710 7 log.go:172] (0xc002bef3f0) (0xc002c46280) Create stream I0519 00:41:26.374724 7 log.go:172] (0xc002bef3f0) (0xc002c46280) Stream added, 
broadcasting: 5 I0519 00:41:26.375558 7 log.go:172] (0xc002bef3f0) Reply frame received for 5 I0519 00:41:26.452454 7 log.go:172] (0xc002bef3f0) Data frame received for 5 I0519 00:41:26.452508 7 log.go:172] (0xc002c46280) (5) Data frame handling I0519 00:41:26.452557 7 log.go:172] (0xc002bef3f0) Data frame received for 3 I0519 00:41:26.452580 7 log.go:172] (0xc00131bcc0) (3) Data frame handling I0519 00:41:26.452598 7 log.go:172] (0xc00131bcc0) (3) Data frame sent I0519 00:41:26.452615 7 log.go:172] (0xc002bef3f0) Data frame received for 3 I0519 00:41:26.452642 7 log.go:172] (0xc00131bcc0) (3) Data frame handling I0519 00:41:26.454158 7 log.go:172] (0xc002bef3f0) Data frame received for 1 I0519 00:41:26.454206 7 log.go:172] (0xc002d408c0) (1) Data frame handling I0519 00:41:26.454224 7 log.go:172] (0xc002d408c0) (1) Data frame sent I0519 00:41:26.454255 7 log.go:172] (0xc002bef3f0) (0xc002d408c0) Stream removed, broadcasting: 1 I0519 00:41:26.454305 7 log.go:172] (0xc002bef3f0) Go away received I0519 00:41:26.454418 7 log.go:172] (0xc002bef3f0) (0xc002d408c0) Stream removed, broadcasting: 1 I0519 00:41:26.454446 7 log.go:172] (0xc002bef3f0) (0xc00131bcc0) Stream removed, broadcasting: 3 I0519 00:41:26.454468 7 log.go:172] (0xc002bef3f0) (0xc002c46280) Stream removed, broadcasting: 5 May 19 00:41:26.454: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 19 00:41:26.454: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:26.454: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:26.483918 7 log.go:172] (0xc001fc7290) (0xc001fb0000) Create stream I0519 00:41:26.483945 7 log.go:172] (0xc001fc7290) (0xc001fb0000) Stream added, broadcasting: 1 I0519 00:41:26.486504 7 log.go:172] (0xc001fc7290) Reply frame received for 1 I0519 00:41:26.486542 7 log.go:172] (0xc001fc7290) (0xc002c46460) Create stream I0519 00:41:26.486555 7 log.go:172] (0xc001fc7290) (0xc002c46460) Stream added, broadcasting: 3 I0519 00:41:26.487615 7 log.go:172] (0xc001fc7290) Reply frame received for 3 I0519 00:41:26.487675 7 log.go:172] (0xc001fc7290) (0xc0013eeaa0) Create stream I0519 00:41:26.487696 7 log.go:172] (0xc001fc7290) (0xc0013eeaa0) Stream added, broadcasting: 5 I0519 00:41:26.488871 7 log.go:172] (0xc001fc7290) Reply frame received for 5 I0519 00:41:26.558926 7 log.go:172] (0xc001fc7290) Data frame received for 3 I0519 00:41:26.558967 7 log.go:172] (0xc001fc7290) Data frame received for 5 I0519 00:41:26.559003 7 log.go:172] (0xc0013eeaa0) (5) Data frame handling I0519 00:41:26.559034 7 log.go:172] (0xc002c46460) (3) Data frame handling I0519 00:41:26.559053 7 log.go:172] (0xc002c46460) (3) Data frame sent I0519 00:41:26.559066 7 log.go:172] (0xc001fc7290) Data frame received for 3 I0519 00:41:26.559080 7 log.go:172] (0xc002c46460) (3) Data frame handling I0519 00:41:26.560632 7 log.go:172] (0xc001fc7290) Data frame received for 1 I0519 00:41:26.560669 7 log.go:172] (0xc001fb0000) (1) Data frame handling I0519 00:41:26.560696 7 log.go:172] (0xc001fb0000) (1) Data frame sent I0519 00:41:26.560716 7 log.go:172] (0xc001fc7290) (0xc001fb0000) Stream removed, broadcasting: 1 I0519 00:41:26.560731 7 log.go:172] (0xc001fc7290) Go away received I0519 00:41:26.560875 7 log.go:172] (0xc001fc7290) (0xc001fb0000) Stream removed, broadcasting: 1 I0519 00:41:26.560899 7 log.go:172] 
(0xc001fc7290) (0xc002c46460) Stream removed, broadcasting: 3 I0519 00:41:26.560917 7 log.go:172] (0xc001fc7290) (0xc0013eeaa0) Stream removed, broadcasting: 5 May 19 00:41:26.560: INFO: Exec stderr: "" May 19 00:41:26.560: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:26.560: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:26.588494 7 log.go:172] (0xc00618a4d0) (0xc002c46780) Create stream I0519 00:41:26.588523 7 log.go:172] (0xc00618a4d0) (0xc002c46780) Stream added, broadcasting: 1 I0519 00:41:26.591059 7 log.go:172] (0xc00618a4d0) Reply frame received for 1 I0519 00:41:26.591093 7 log.go:172] (0xc00618a4d0) (0xc0013eeb40) Create stream I0519 00:41:26.591102 7 log.go:172] (0xc00618a4d0) (0xc0013eeb40) Stream added, broadcasting: 3 I0519 00:41:26.592059 7 log.go:172] (0xc00618a4d0) Reply frame received for 3 I0519 00:41:26.592108 7 log.go:172] (0xc00618a4d0) (0xc001a9bf40) Create stream I0519 00:41:26.592122 7 log.go:172] (0xc00618a4d0) (0xc001a9bf40) Stream added, broadcasting: 5 I0519 00:41:26.593337 7 log.go:172] (0xc00618a4d0) Reply frame received for 5 I0519 00:41:26.662248 7 log.go:172] (0xc00618a4d0) Data frame received for 3 I0519 00:41:26.662327 7 log.go:172] (0xc0013eeb40) (3) Data frame handling I0519 00:41:26.662362 7 log.go:172] (0xc0013eeb40) (3) Data frame sent I0519 00:41:26.662401 7 log.go:172] (0xc00618a4d0) Data frame received for 3 I0519 00:41:26.662437 7 log.go:172] (0xc0013eeb40) (3) Data frame handling I0519 00:41:26.662478 7 log.go:172] (0xc00618a4d0) Data frame received for 5 I0519 00:41:26.662537 7 log.go:172] (0xc001a9bf40) (5) Data frame handling I0519 00:41:26.664651 7 log.go:172] (0xc00618a4d0) Data frame received for 1 I0519 00:41:26.664681 7 log.go:172] (0xc002c46780) (1) Data frame handling I0519 00:41:26.664697 7 log.go:172] (0xc002c46780) (1) Data frame sent I0519 00:41:26.664719 7 log.go:172] (0xc00618a4d0) (0xc002c46780) Stream removed, broadcasting: 1 I0519 00:41:26.664735 7 log.go:172] (0xc00618a4d0) Go away received I0519 00:41:26.664835 7 log.go:172] (0xc00618a4d0) (0xc002c46780) Stream removed, broadcasting: 1 I0519 00:41:26.664849 7 log.go:172] (0xc00618a4d0) (0xc0013eeb40) Stream removed, broadcasting: 3 I0519 00:41:26.664857 7 log.go:172] (0xc00618a4d0) (0xc001a9bf40) Stream removed, broadcasting: 5 May 19 00:41:26.664: INFO: Exec stderr: "" May 19 00:41:26.664: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:26.664: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:26.688279 7 log.go:172] (0xc002c516b0) (0xc0013ef0e0) Create stream I0519 00:41:26.688306 7 log.go:172] (0xc002c516b0) (0xc0013ef0e0) Stream added, broadcasting: 1 I0519 00:41:26.690388 7 log.go:172] (0xc002c516b0) Reply frame received for 1 I0519 00:41:26.690419 7 log.go:172] (0xc002c516b0) (0xc002c468c0) Create stream I0519 00:41:26.690429 7 log.go:172] (0xc002c516b0) (0xc002c468c0) Stream added, broadcasting: 3 I0519 00:41:26.691188 7 log.go:172] (0xc002c516b0) Reply frame received for 3 I0519 00:41:26.691228 7 log.go:172] (0xc002c516b0) (0xc002c46a00) Create stream I0519 00:41:26.691255 7 log.go:172] (0xc002c516b0) (0xc002c46a00) Stream added, broadcasting: 5 I0519 00:41:26.692127 7 log.go:172] (0xc002c516b0) Reply 
frame received for 5 I0519 00:41:26.759939 7 log.go:172] (0xc002c516b0) Data frame received for 5 I0519 00:41:26.759984 7 log.go:172] (0xc002c46a00) (5) Data frame handling I0519 00:41:26.760016 7 log.go:172] (0xc002c516b0) Data frame received for 3 I0519 00:41:26.760030 7 log.go:172] (0xc002c468c0) (3) Data frame handling I0519 00:41:26.760044 7 log.go:172] (0xc002c468c0) (3) Data frame sent I0519 00:41:26.760093 7 log.go:172] (0xc002c516b0) Data frame received for 3 I0519 00:41:26.760106 7 log.go:172] (0xc002c468c0) (3) Data frame handling I0519 00:41:26.761556 7 log.go:172] (0xc002c516b0) Data frame received for 1 I0519 00:41:26.761577 7 log.go:172] (0xc0013ef0e0) (1) Data frame handling I0519 00:41:26.761606 7 log.go:172] (0xc0013ef0e0) (1) Data frame sent I0519 00:41:26.762170 7 log.go:172] (0xc002c516b0) (0xc0013ef0e0) Stream removed, broadcasting: 1 I0519 00:41:26.762224 7 log.go:172] (0xc002c516b0) Go away received I0519 00:41:26.762259 7 log.go:172] (0xc002c516b0) (0xc0013ef0e0) Stream removed, broadcasting: 1 I0519 00:41:26.762281 7 log.go:172] (0xc002c516b0) (0xc002c468c0) Stream removed, broadcasting: 3 I0519 00:41:26.762290 7 log.go:172] (0xc002c516b0) (0xc002c46a00) Stream removed, broadcasting: 5 May 19 00:41:26.762: INFO: Exec stderr: "" May 19 00:41:26.762: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6970 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:41:26.762: INFO: >>> kubeConfig: /root/.kube/config I0519 00:41:26.792644 7 log.go:172] (0xc002c51ce0) (0xc0013ef720) Create stream I0519 00:41:26.792676 7 log.go:172] (0xc002c51ce0) (0xc0013ef720) Stream added, broadcasting: 1 I0519 00:41:26.794955 7 log.go:172] (0xc002c51ce0) Reply frame received for 1 I0519 00:41:26.795000 7 log.go:172] (0xc002c51ce0) (0xc0013ef860) Create stream I0519 00:41:26.795016 7 log.go:172] (0xc002c51ce0) (0xc0013ef860) Stream added, broadcasting: 3 I0519 00:41:26.795901 7 log.go:172] (0xc002c51ce0) Reply frame received for 3 I0519 00:41:26.795975 7 log.go:172] (0xc002c51ce0) (0xc0013efa40) Create stream I0519 00:41:26.795999 7 log.go:172] (0xc002c51ce0) (0xc0013efa40) Stream added, broadcasting: 5 I0519 00:41:26.797073 7 log.go:172] (0xc002c51ce0) Reply frame received for 5 I0519 00:41:26.857959 7 log.go:172] (0xc002c51ce0) Data frame received for 5 I0519 00:41:26.857984 7 log.go:172] (0xc0013efa40) (5) Data frame handling I0519 00:41:26.858035 7 log.go:172] (0xc002c51ce0) Data frame received for 3 I0519 00:41:26.858089 7 log.go:172] (0xc0013ef860) (3) Data frame handling I0519 00:41:26.858123 7 log.go:172] (0xc0013ef860) (3) Data frame sent I0519 00:41:26.858140 7 log.go:172] (0xc002c51ce0) Data frame received for 3 I0519 00:41:26.858154 7 log.go:172] (0xc0013ef860) (3) Data frame handling I0519 00:41:26.859666 7 log.go:172] (0xc002c51ce0) Data frame received for 1 I0519 00:41:26.859737 7 log.go:172] (0xc0013ef720) (1) Data frame handling I0519 00:41:26.859777 7 log.go:172] (0xc0013ef720) (1) Data frame sent I0519 00:41:26.859796 7 log.go:172] (0xc002c51ce0) (0xc0013ef720) Stream removed, broadcasting: 1 I0519 00:41:26.859841 7 log.go:172] (0xc002c51ce0) Go away received I0519 00:41:26.859947 7 log.go:172] (0xc002c51ce0) (0xc0013ef720) Stream removed, broadcasting: 1 I0519 00:41:26.859975 7 log.go:172] (0xc002c51ce0) (0xc0013ef860) Stream removed, broadcasting: 3 I0519 00:41:26.859992 7 log.go:172] (0xc002c51ce0) (0xc0013efa40) Stream removed, broadcasting: 5 May 19 
00:41:26.860: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:41:26.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6970" for this suite. • [SLOW TEST:11.174 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":200,"skipped":3355,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:41:26.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:41:42.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6037" for this suite. • [SLOW TEST:16.127 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":288,"completed":201,"skipped":3377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:41:42.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-f26e57a0-ed09-4804-8fa0-6caff8b946c3 in namespace container-probe-5959 May 19 00:41:47.164: INFO: Started pod busybox-f26e57a0-ed09-4804-8fa0-6caff8b946c3 in namespace container-probe-5959 STEP: checking the pod's current state and verifying that restartCount is present May 19 00:41:47.167: INFO: Initial restart count of pod busybox-f26e57a0-ed09-4804-8fa0-6caff8b946c3 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:45:47.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5959" for this suite. 
• [SLOW TEST:244.878 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3407,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:45:47.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 19 00:45:52.014: INFO: &Pod{ObjectMeta:{send-events-b5fc9ad0-c364-4870-85b0-c5c8ca6616dd events-8229 /api/v1/namespaces/events-8229/pods/send-events-b5fc9ad0-c364-4870-85b0-c5c8ca6616dd 6e9a1b7b-a5f5-4af8-b039-b50bff43d3b6 5826539 0 2020-05-19 00:45:47 +0000 UTC map[name:foo time:946026891] map[] [] [] [{e2e.test Update v1 2020-05-19 00:45:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:45:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.202\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bpfvh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bpfvh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bpfvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:45:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:45:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:45:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:45:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.202,StartTime:2020-05-19 00:45:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:45:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://a81e36c445fb9f048948fcbcc37335b2e7e8675ad570b50b0099bc419a42ea32,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.202,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 19 00:45:54.024: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 19 00:45:56.030: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:45:56.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8229" for this suite. • [SLOW TEST:8.247 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":203,"skipped":3420,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:45:56.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:45:56.212: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e5a4272a-6dc1-48e2-b111-ea897195e267" in namespace "security-context-test-8670" to be "Succeeded or Failed" May 19 00:45:56.216: INFO: Pod 
"busybox-privileged-false-e5a4272a-6dc1-48e2-b111-ea897195e267": Phase="Pending", Reason="", readiness=false. Elapsed: 3.267192ms May 19 00:45:58.292: INFO: Pod "busybox-privileged-false-e5a4272a-6dc1-48e2-b111-ea897195e267": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079775055s May 19 00:46:00.426: INFO: Pod "busybox-privileged-false-e5a4272a-6dc1-48e2-b111-ea897195e267": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.214174384s May 19 00:46:00.426: INFO: Pod "busybox-privileged-false-e5a4272a-6dc1-48e2-b111-ea897195e267" satisfied condition "Succeeded or Failed" May 19 00:46:00.441: INFO: Got logs for pod "busybox-privileged-false-e5a4272a-6dc1-48e2-b111-ea897195e267": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:46:00.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8670" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":204,"skipped":3422,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:46:00.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:46:04.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1687" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3441,"failed":0} ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:46:04.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 19 00:46:09.839: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:46:10.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3607" for this suite. • [SLOW TEST:6.156 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":206,"skipped":3441,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:46:10.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 19 00:46:11.108: INFO: Pod name pod-release: Found 0 pods out of 1 May 19 00:46:16.112: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:46:17.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "replication-controller-8499" for this suite. • [SLOW TEST:6.274 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":207,"skipped":3457,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:46:17.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 19 00:46:18.081: INFO: Created pod &Pod{ObjectMeta:{dns-7670 dns-7670 /api/v1/namespaces/dns-7670/pods/dns-7670 c3925b9b-861f-4e26-8b91-4e2811b7b9ff 5826754 0 2020-05-19 00:46:18 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-19 00:46:18 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s9j6s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s9j6s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s9j6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccoun
t:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:46:18.098: INFO: The status of Pod dns-7670 is Pending, waiting for it to be Running (with Ready = true) May 19 00:46:20.210: INFO: The status of Pod dns-7670 is Pending, waiting for it to be Running (with Ready = true) May 19 00:46:22.102: INFO: The status of Pod dns-7670 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 19 00:46:22.102: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7670 PodName:dns-7670 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:46:22.102: INFO: >>> kubeConfig: /root/.kube/config I0519 00:46:22.137955 7 log.go:172] (0xc001fc6000) (0xc001fb0960) Create stream I0519 00:46:22.137984 7 log.go:172] (0xc001fc6000) (0xc001fb0960) Stream added, broadcasting: 1 I0519 00:46:22.139814 7 log.go:172] (0xc001fc6000) Reply frame received for 1 I0519 00:46:22.139861 7 log.go:172] (0xc001fc6000) (0xc0013ee960) Create stream I0519 00:46:22.139879 7 log.go:172] (0xc001fc6000) (0xc0013ee960) Stream added, broadcasting: 3 I0519 00:46:22.140975 7 log.go:172] (0xc001fc6000) Reply frame received for 3 I0519 00:46:22.141022 7 log.go:172] (0xc001fc6000) (0xc001812000) Create stream I0519 00:46:22.141073 7 log.go:172] (0xc001fc6000) (0xc001812000) Stream added, broadcasting: 5 I0519 00:46:22.142582 7 log.go:172] (0xc001fc6000) Reply frame received for 5 I0519 00:46:22.235474 7 log.go:172] (0xc001fc6000) Data frame received for 3 I0519 00:46:22.235498 7 log.go:172] (0xc0013ee960) (3) Data frame handling I0519 00:46:22.235512 7 log.go:172] (0xc0013ee960) (3) Data frame sent I0519 00:46:22.237308 7 log.go:172] (0xc001fc6000) Data frame received for 5 I0519 00:46:22.237330 7 log.go:172] (0xc001812000) (5) Data frame handling I0519 00:46:22.237349 7 log.go:172] (0xc001fc6000) Data frame received for 3 I0519 00:46:22.237365 7 log.go:172] (0xc0013ee960) (3) Data frame handling I0519 00:46:22.239363 7 log.go:172] (0xc001fc6000) Data frame received for 1 I0519 00:46:22.239379 7 log.go:172] (0xc001fb0960) (1) Data frame handling I0519 00:46:22.239392 7 log.go:172] (0xc001fb0960) (1) Data frame sent I0519 
00:46:22.239411 7 log.go:172] (0xc001fc6000) (0xc001fb0960) Stream removed, broadcasting: 1 I0519 00:46:22.239426 7 log.go:172] (0xc001fc6000) Go away received I0519 00:46:22.239575 7 log.go:172] (0xc001fc6000) (0xc001fb0960) Stream removed, broadcasting: 1 I0519 00:46:22.239601 7 log.go:172] (0xc001fc6000) (0xc0013ee960) Stream removed, broadcasting: 3 I0519 00:46:22.239619 7 log.go:172] (0xc001fc6000) (0xc001812000) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 19 00:46:22.239: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7670 PodName:dns-7670 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:46:22.239: INFO: >>> kubeConfig: /root/.kube/config I0519 00:46:22.270150 7 log.go:172] (0xc002c513f0) (0xc0020e7040) Create stream I0519 00:46:22.270176 7 log.go:172] (0xc002c513f0) (0xc0020e7040) Stream added, broadcasting: 1 I0519 00:46:22.271750 7 log.go:172] (0xc002c513f0) Reply frame received for 1 I0519 00:46:22.271780 7 log.go:172] (0xc002c513f0) (0xc001fb0a00) Create stream I0519 00:46:22.271790 7 log.go:172] (0xc002c513f0) (0xc001fb0a00) Stream added, broadcasting: 3 I0519 00:46:22.272405 7 log.go:172] (0xc002c513f0) Reply frame received for 3 I0519 00:46:22.272447 7 log.go:172] (0xc002c513f0) (0xc001fb0aa0) Create stream I0519 00:46:22.272461 7 log.go:172] (0xc002c513f0) (0xc001fb0aa0) Stream added, broadcasting: 5 I0519 00:46:22.273245 7 log.go:172] (0xc002c513f0) Reply frame received for 5 I0519 00:46:22.339174 7 log.go:172] (0xc002c513f0) Data frame received for 3 I0519 00:46:22.339205 7 log.go:172] (0xc001fb0a00) (3) Data frame handling I0519 00:46:22.339235 7 log.go:172] (0xc001fb0a00) (3) Data frame sent I0519 00:46:22.341093 7 log.go:172] (0xc002c513f0) Data frame received for 5 I0519 00:46:22.341300 7 log.go:172] (0xc001fb0aa0) (5) Data frame handling I0519 00:46:22.341638 7 log.go:172] (0xc002c513f0) Data frame received for 3 I0519 00:46:22.341660 7 log.go:172] (0xc001fb0a00) (3) Data frame handling I0519 00:46:22.343121 7 log.go:172] (0xc002c513f0) Data frame received for 1 I0519 00:46:22.343138 7 log.go:172] (0xc0020e7040) (1) Data frame handling I0519 00:46:22.343158 7 log.go:172] (0xc0020e7040) (1) Data frame sent I0519 00:46:22.343268 7 log.go:172] (0xc002c513f0) (0xc0020e7040) Stream removed, broadcasting: 1 I0519 00:46:22.343301 7 log.go:172] (0xc002c513f0) Go away received I0519 00:46:22.343489 7 log.go:172] (0xc002c513f0) (0xc0020e7040) Stream removed, broadcasting: 1 I0519 00:46:22.343520 7 log.go:172] (0xc002c513f0) (0xc001fb0a00) Stream removed, broadcasting: 3 I0519 00:46:22.343535 7 log.go:172] (0xc002c513f0) (0xc001fb0aa0) Stream removed, broadcasting: 5 May 19 00:46:22.343: INFO: Deleting pod dns-7670... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:46:22.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7670" for this suite. 
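The pod dump above shows the mechanism under test: dnsPolicy None makes the kubelet compose the container's /etc/resolv.conf entirely from dnsConfig, ignoring cluster DNS, which is why the test then execs /agnhost dns-server-list and dns-suffix to verify 1.1.1.1 and resolv.conf.local. A minimal Go sketch of that pod, with the nameserver, search domain, agnhost image, and pause argument taken directly from the dump; only the pod name is illustrative:

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // dnsPolicy: None tells the kubelet to write the container's resolv.conf
    // purely from dnsConfig; the values here are the ones visible in the pod
    // dump logged above.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-custom"},
        Spec: corev1.PodSpec{
            DNSPolicy: corev1.DNSNone,
            DNSConfig: &corev1.PodDNSConfig{
                Nameservers: []string{"1.1.1.1"},
                Searches:    []string{"resolv.conf.local"},
            },
            Containers: []corev1.Container{{
                Name:  "agnhost",
                Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
                Args:  []string{"pause"},
            }},
        },
    }
    _ = json.NewEncoder(os.Stdout).Encode(pod)
}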
• [SLOW TEST:5.306 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":208,"skipped":3469,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:46:22.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 19 00:46:22.820: INFO: Waiting up to 5m0s for pod "downward-api-2cabb9df-6382-45ed-8ebf-6b8304b8d746" in namespace "downward-api-3898" to be "Succeeded or Failed" May 19 00:46:22.947: INFO: Pod "downward-api-2cabb9df-6382-45ed-8ebf-6b8304b8d746": Phase="Pending", Reason="", readiness=false. Elapsed: 127.780089ms May 19 00:46:24.951: INFO: Pod "downward-api-2cabb9df-6382-45ed-8ebf-6b8304b8d746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131115912s May 19 00:46:26.954: INFO: Pod "downward-api-2cabb9df-6382-45ed-8ebf-6b8304b8d746": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134810607s May 19 00:46:28.959: INFO: Pod "downward-api-2cabb9df-6382-45ed-8ebf-6b8304b8d746": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13956858s STEP: Saw pod success May 19 00:46:28.959: INFO: Pod "downward-api-2cabb9df-6382-45ed-8ebf-6b8304b8d746" satisfied condition "Succeeded or Failed" May 19 00:46:28.962: INFO: Trying to get logs from node latest-worker2 pod downward-api-2cabb9df-6382-45ed-8ebf-6b8304b8d746 container dapi-container: STEP: delete the pod May 19 00:46:29.008: INFO: Waiting for pod downward-api-2cabb9df-6382-45ed-8ebf-6b8304b8d746 to disappear May 19 00:46:29.037: INFO: Pod downward-api-2cabb9df-6382-45ed-8ebf-6b8304b8d746 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:46:29.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3898" for this suite. 
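The downward-API test that just passed relies on a documented fallback: when a container declares no resource limits, env vars backed by resourceFieldRef for limits.cpu and limits.memory resolve to the node's allocatable capacity, which is exactly what the test asserts. A minimal Go sketch of a pod in that shape; the container name matches the dapi-container seen above, while the pod name, image, command, and env var names are illustrative:

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    one := resource.MustParse("1")
    // No resources are set on the container, so the downward-API values for
    // limits.cpu and limits.memory fall back to node allocatable.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{
                    {
                        Name: "CPU_LIMIT",
                        ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                Resource: "limits.cpu",
                                Divisor:  one,
                            },
                        },
                    },
                    {
                        Name: "MEMORY_LIMIT",
                        ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                Resource: "limits.memory",
                                Divisor:  one,
                            },
                        },
                    },
                },
            }},
        },
    }
    _ = json.NewEncoder(os.Stdout).Encode(pod)
}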
• [SLOW TEST:6.602 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":209,"skipped":3475,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:46:29.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-5980 STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 00:46:29.138: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 19 00:46:29.337: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 00:46:31.340: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 00:46:33.361: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:46:35.381: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:46:37.342: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:46:39.341: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:46:41.361: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:46:43.342: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:46:45.341: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:46:47.341: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:46:49.341: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 00:46:51.342: INFO: The status of Pod netserver-0 is Running (Ready = true) May 19 00:46:51.349: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 19 00:46:55.371: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.195:8080/dial?request=hostname&protocol=http&host=10.244.1.206&port=8080&tries=1'] Namespace:pod-network-test-5980 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:46:55.371: INFO: >>> kubeConfig: /root/.kube/config I0519 00:46:55.402174 7 log.go:172] (0xc001fc6b00) (0xc002d406e0) Create stream I0519 00:46:55.402207 7 log.go:172] (0xc001fc6b00) (0xc002d406e0) Stream added, broadcasting: 1 I0519 00:46:55.404446 7 log.go:172] (0xc001fc6b00) Reply frame received for 1 I0519 00:46:55.404475 7 log.go:172] (0xc001fc6b00) (0xc002b07180) Create stream I0519 00:46:55.404506 7 log.go:172] (0xc001fc6b00) 
(0xc002b07180) Stream added, broadcasting: 3 I0519 00:46:55.405670 7 log.go:172] (0xc001fc6b00) Reply frame received for 3 I0519 00:46:55.405700 7 log.go:172] (0xc001fc6b00) (0xc002d40780) Create stream I0519 00:46:55.405711 7 log.go:172] (0xc001fc6b00) (0xc002d40780) Stream added, broadcasting: 5 I0519 00:46:55.406527 7 log.go:172] (0xc001fc6b00) Reply frame received for 5 I0519 00:46:55.482714 7 log.go:172] (0xc001fc6b00) Data frame received for 3 I0519 00:46:55.482759 7 log.go:172] (0xc002b07180) (3) Data frame handling I0519 00:46:55.482792 7 log.go:172] (0xc002b07180) (3) Data frame sent I0519 00:46:55.483268 7 log.go:172] (0xc001fc6b00) Data frame received for 5 I0519 00:46:55.483296 7 log.go:172] (0xc002d40780) (5) Data frame handling I0519 00:46:55.483325 7 log.go:172] (0xc001fc6b00) Data frame received for 3 I0519 00:46:55.483396 7 log.go:172] (0xc002b07180) (3) Data frame handling I0519 00:46:55.485538 7 log.go:172] (0xc001fc6b00) Data frame received for 1 I0519 00:46:55.485560 7 log.go:172] (0xc002d406e0) (1) Data frame handling I0519 00:46:55.485572 7 log.go:172] (0xc002d406e0) (1) Data frame sent I0519 00:46:55.485584 7 log.go:172] (0xc001fc6b00) (0xc002d406e0) Stream removed, broadcasting: 1 I0519 00:46:55.485612 7 log.go:172] (0xc001fc6b00) Go away received I0519 00:46:55.485722 7 log.go:172] (0xc001fc6b00) (0xc002d406e0) Stream removed, broadcasting: 1 I0519 00:46:55.485749 7 log.go:172] (0xc001fc6b00) (0xc002b07180) Stream removed, broadcasting: 3 I0519 00:46:55.485784 7 log.go:172] (0xc001fc6b00) (0xc002d40780) Stream removed, broadcasting: 5 May 19 00:46:55.485: INFO: Waiting for responses: map[] May 19 00:46:55.488: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.195:8080/dial?request=hostname&protocol=http&host=10.244.2.194&port=8080&tries=1'] Namespace:pod-network-test-5980 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 00:46:55.488: INFO: >>> kubeConfig: /root/.kube/config I0519 00:46:55.516868 7 log.go:172] (0xc001fc71e0) (0xc002d40fa0) Create stream I0519 00:46:55.516897 7 log.go:172] (0xc001fc71e0) (0xc002d40fa0) Stream added, broadcasting: 1 I0519 00:46:55.519181 7 log.go:172] (0xc001fc71e0) Reply frame received for 1 I0519 00:46:55.519213 7 log.go:172] (0xc001fc71e0) (0xc0013efc20) Create stream I0519 00:46:55.519225 7 log.go:172] (0xc001fc71e0) (0xc0013efc20) Stream added, broadcasting: 3 I0519 00:46:55.520135 7 log.go:172] (0xc001fc71e0) Reply frame received for 3 I0519 00:46:55.520178 7 log.go:172] (0xc001fc71e0) (0xc0013efcc0) Create stream I0519 00:46:55.520191 7 log.go:172] (0xc001fc71e0) (0xc0013efcc0) Stream added, broadcasting: 5 I0519 00:46:55.521067 7 log.go:172] (0xc001fc71e0) Reply frame received for 5 I0519 00:46:55.595054 7 log.go:172] (0xc001fc71e0) Data frame received for 3 I0519 00:46:55.595101 7 log.go:172] (0xc0013efc20) (3) Data frame handling I0519 00:46:55.595142 7 log.go:172] (0xc0013efc20) (3) Data frame sent I0519 00:46:55.595344 7 log.go:172] (0xc001fc71e0) Data frame received for 5 I0519 00:46:55.595392 7 log.go:172] (0xc001fc71e0) Data frame received for 3 I0519 00:46:55.595431 7 log.go:172] (0xc0013efc20) (3) Data frame handling I0519 00:46:55.595458 7 log.go:172] (0xc0013efcc0) (5) Data frame handling I0519 00:46:55.596811 7 log.go:172] (0xc001fc71e0) Data frame received for 1 I0519 00:46:55.596836 7 log.go:172] (0xc002d40fa0) (1) Data frame handling I0519 00:46:55.596857 7 log.go:172] (0xc002d40fa0) (1) Data frame sent I0519 
00:46:55.596871 7 log.go:172] (0xc001fc71e0) (0xc002d40fa0) Stream removed, broadcasting: 1 I0519 00:46:55.596932 7 log.go:172] (0xc001fc71e0) (0xc002d40fa0) Stream removed, broadcasting: 1 I0519 00:46:55.596941 7 log.go:172] (0xc001fc71e0) (0xc0013efc20) Stream removed, broadcasting: 3 I0519 00:46:55.596972 7 log.go:172] (0xc001fc71e0) Go away received I0519 00:46:55.597042 7 log.go:172] (0xc001fc71e0) (0xc0013efcc0) Stream removed, broadcasting: 5 May 19 00:46:55.597: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:46:55.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5980" for this suite. • [SLOW TEST:26.561 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:46:55.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 00:46:55.730: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01c396f1-a5be-47c0-aba4-fff0d4c3fb58" in namespace "downward-api-1365" to be "Succeeded or Failed" May 19 00:46:55.750: INFO: Pod "downwardapi-volume-01c396f1-a5be-47c0-aba4-fff0d4c3fb58": Phase="Pending", Reason="", readiness=false. Elapsed: 19.857458ms May 19 00:46:57.753: INFO: Pod "downwardapi-volume-01c396f1-a5be-47c0-aba4-fff0d4c3fb58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023439935s May 19 00:46:59.780: INFO: Pod "downwardapi-volume-01c396f1-a5be-47c0-aba4-fff0d4c3fb58": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050082301s STEP: Saw pod success May 19 00:46:59.780: INFO: Pod "downwardapi-volume-01c396f1-a5be-47c0-aba4-fff0d4c3fb58" satisfied condition "Succeeded or Failed" May 19 00:46:59.783: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-01c396f1-a5be-47c0-aba4-fff0d4c3fb58 container client-container: STEP: delete the pod May 19 00:46:59.819: INFO: Waiting for pod downwardapi-volume-01c396f1-a5be-47c0-aba4-fff0d4c3fb58 to disappear May 19 00:46:59.828: INFO: Pod downwardapi-volume-01c396f1-a5be-47c0-aba4-fff0d4c3fb58 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:46:59.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1365" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":211,"skipped":3498,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:46:59.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 00:47:00.336: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 00:47:02.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446020, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446020, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446020, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446020, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:47:04.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446020, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446020, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446020, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446020, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:47:07.395: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 19 00:47:11.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-6770 to-be-attached-pod -i -c=container1' May 19 00:47:11.604: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:47:11.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6770" for this suite. STEP: Destroying namespace "webhook-6770-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.909 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":212,"skipped":3511,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:47:11.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 19 00:47:16.387: INFO: Successfully updated pod 
"pod-update-activedeadlineseconds-e4c7d431-1096-4e7d-b690-2f2a93d17f3f" May 19 00:47:16.387: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e4c7d431-1096-4e7d-b690-2f2a93d17f3f" in namespace "pods-9387" to be "terminated due to deadline exceeded" May 19 00:47:16.392: INFO: Pod "pod-update-activedeadlineseconds-e4c7d431-1096-4e7d-b690-2f2a93d17f3f": Phase="Running", Reason="", readiness=true. Elapsed: 5.535675ms May 19 00:47:18.397: INFO: Pod "pod-update-activedeadlineseconds-e4c7d431-1096-4e7d-b690-2f2a93d17f3f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.010587634s May 19 00:47:18.397: INFO: Pod "pod-update-activedeadlineseconds-e4c7d431-1096-4e7d-b690-2f2a93d17f3f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:47:18.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9387" for this suite. • [SLOW TEST:6.662 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":213,"skipped":3518,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:47:18.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-7d57156a-b3af-48cb-a1cc-aa92c769214d STEP: Creating a pod to test consume secrets May 19 00:47:18.494: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-710bc10f-0cf9-4e3d-a9c7-76333e1d8493" in namespace "projected-7162" to be "Succeeded or Failed" May 19 00:47:18.506: INFO: Pod "pod-projected-secrets-710bc10f-0cf9-4e3d-a9c7-76333e1d8493": Phase="Pending", Reason="", readiness=false. Elapsed: 12.175489ms May 19 00:47:20.600: INFO: Pod "pod-projected-secrets-710bc10f-0cf9-4e3d-a9c7-76333e1d8493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106561158s May 19 00:47:22.604: INFO: Pod "pod-projected-secrets-710bc10f-0cf9-4e3d-a9c7-76333e1d8493": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.109979196s STEP: Saw pod success May 19 00:47:22.604: INFO: Pod "pod-projected-secrets-710bc10f-0cf9-4e3d-a9c7-76333e1d8493" satisfied condition "Succeeded or Failed" May 19 00:47:22.606: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-710bc10f-0cf9-4e3d-a9c7-76333e1d8493 container projected-secret-volume-test: STEP: delete the pod May 19 00:47:22.662: INFO: Waiting for pod pod-projected-secrets-710bc10f-0cf9-4e3d-a9c7-76333e1d8493 to disappear May 19 00:47:22.674: INFO: Pod pod-projected-secrets-710bc10f-0cf9-4e3d-a9c7-76333e1d8493 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:47:22.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7162" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":214,"skipped":3523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:47:22.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:48:22.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7414" for this suite. 
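The probe test that follows runs for its full 60-second observation window because a failing readiness probe, unlike a failing liveness probe, never restarts the container; it only keeps the pod out of Ready, so the test expects "never ready and never restart". A minimal Go sketch of such a pod, written against the v1.18-era k8s.io/api types this suite ran with (where the probe's embedded field is still named Handler; later releases renamed it ProbeHandler); all names and the probe command here are illustrative:

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The readiness probe always fails, so the pod stays Running but is never
    // marked Ready, and the container is never restarted: readiness gates
    // Ready status and endpoints only, it does not kill the container.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "probe-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sleep", "3600"},
                ReadinessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                    },
                    PeriodSeconds: 5,
                },
            }},
        },
    }
    _ = json.NewEncoder(os.Stdout).Encode(pod)
}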
• [SLOW TEST:60.167 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":215,"skipped":3554,"failed":0} SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:48:22.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 19 00:48:27.460: INFO: Successfully updated pod "pod-update-bb04f25b-2ccd-45f1-9de6-1596daeb65ef" STEP: verifying the updated pod is in kubernetes May 19 00:48:27.485: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:48:27.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7417" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":216,"skipped":3557,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:48:27.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-czbs STEP: Creating a pod to test atomic-volume-subpath May 19 00:48:27.605: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-czbs" in namespace "subpath-9981" to be "Succeeded or Failed" May 19 00:48:27.655: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Pending", Reason="", readiness=false. Elapsed: 49.953173ms May 19 00:48:29.763: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.157998888s May 19 00:48:31.768: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 4.162223508s May 19 00:48:33.772: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 6.166691633s May 19 00:48:35.777: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 8.171468337s May 19 00:48:37.780: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 10.174628822s May 19 00:48:39.784: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 12.178836806s May 19 00:48:41.789: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 14.18366216s May 19 00:48:43.794: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 16.188129564s May 19 00:48:45.798: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 18.193016828s May 19 00:48:47.802: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 20.196897954s May 19 00:48:49.808: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Running", Reason="", readiness=true. Elapsed: 22.202317948s May 19 00:48:51.813: INFO: Pod "pod-subpath-test-configmap-czbs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.207470775s STEP: Saw pod success May 19 00:48:51.813: INFO: Pod "pod-subpath-test-configmap-czbs" satisfied condition "Succeeded or Failed" May 19 00:48:51.816: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-czbs container test-container-subpath-configmap-czbs: STEP: delete the pod May 19 00:48:52.043: INFO: Waiting for pod pod-subpath-test-configmap-czbs to disappear May 19 00:48:52.054: INFO: Pod pod-subpath-test-configmap-czbs no longer exists STEP: Deleting pod pod-subpath-test-configmap-czbs May 19 00:48:52.054: INFO: Deleting pod "pod-subpath-test-configmap-czbs" in namespace "subpath-9981" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:48:52.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9981" for this suite. 
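The subpath test above mounts a single entry of a ConfigMap-backed volume via volumeMounts[].subPath and polls the pod until it succeeds; the roughly 20 seconds of Running status in the log is the test container exercising the mounted file before exiting. A minimal Go sketch of that shape; every name and the command here are illustrative rather than taken from this run:

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // subPath selects one key of the ConfigMap volume, so only that entry is
    // mounted at the given path inside the container.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "config",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "cat /test/sub/path && sleep 20"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "config",
                    MountPath: "/test/sub/path",
                    SubPath:   "configmap-key", // mount just this entry of the volume
                }},
            }},
        },
    }
    _ = json.NewEncoder(os.Stdout).Encode(pod)
}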
• [SLOW TEST:24.547 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":217,"skipped":3557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:48:52.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:48:52.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 19 00:48:52.739: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T00:48:52Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-19T00:48:52Z]] name:name1 resourceVersion:5827582 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:af0d9ce6-ccdc-41fa-8f99-3e220589aa5a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 19 00:49:02.747: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T00:49:02Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-19T00:49:02Z]] name:name2 resourceVersion:5827624 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c8f7a2ad-a99a-4a29-9023-0864f70f04d6] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 19 00:49:12.754: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T00:48:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-19T00:49:12Z]] name:name1 resourceVersion:5827654 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:af0d9ce6-ccdc-41fa-8f99-3e220589aa5a] num:map[num1:9223372036854775807 num2:1000000]]} 
STEP: Modifying second CR May 19 00:49:22.794: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T00:49:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-19T00:49:22Z]] name:name2 resourceVersion:5827684 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c8f7a2ad-a99a-4a29-9023-0864f70f04d6] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 19 00:49:32.803: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T00:48:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-19T00:49:12Z]] name:name1 resourceVersion:5827712 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:af0d9ce6-ccdc-41fa-8f99-3e220589aa5a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 19 00:49:42.813: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-19T00:49:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-19T00:49:22Z]] name:name2 resourceVersion:5827742 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c8f7a2ad-a99a-4a29-9023-0864f70f04d6] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:49:53.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-869" for this suite. 
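The "Got : ADDED/MODIFIED/DELETED" records above are watch events on custom resources; the selfLinks (/apis/mygroup.example.com/v1beta1/noxus/name1) pin down the group, version, and resource being watched, and the absence of a namespace segment shows the CRD is cluster-scoped. A minimal Go sketch of an equivalent watch using client-go's dynamic client; the kubeconfig path is the one this run logs, and the GVR comes from those selfLinks:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the same kubeconfig the e2e run uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := dynamic.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Group/version/resource taken from the selfLinks in the log:
    // /apis/mygroup.example.com/v1beta1/noxus/...
    gvr := schema.GroupVersionResource{
        Group:    "mygroup.example.com",
        Version:  "v1beta1",
        Resource: "noxus",
    }
    w, err := client.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        // Events arrive as ADDED / MODIFIED / DELETED, mirroring the log above.
        fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
    }
}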
• [SLOW TEST:61.266 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":218,"skipped":3587,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:49:53.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 19 00:49:53.414: INFO: PodSpec: initContainers in spec.initContainers May 19 00:50:45.176: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e1022918-19bc-4594-aa18-d5df6ae773a7", GenerateName:"", Namespace:"init-container-2820", SelfLink:"/api/v1/namespaces/init-container-2820/pods/pod-init-e1022918-19bc-4594-aa18-d5df6ae773a7", UID:"331e197f-313d-48f0-a6a3-9beaf9a30acc", ResourceVersion:"5827965", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725446193, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"414821357"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001b92100), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001b92180)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001b921a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001b92260)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-l82hf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0003fe4c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-l82hf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-l82hf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-l82hf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002bc6098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b0a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002bc6120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002bc6140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002bc6148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002bc614c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446193, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446193, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446193, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446193, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.211", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.211"}}, StartTime:(*v1.Time)(0xc001b922c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b0a0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b0a150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c0e1b41a0b5b02b90b1f298645c8304a9db83e34d7f09c907e81029e85d6b8ee", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001b92320), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001b922e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002bc61cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:50:45.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2820" for this suite. • [SLOW TEST:51.909 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":219,"skipped":3608,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:50:45.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 19 00:50:45.306: INFO: Waiting up to 5m0s for pod "pod-7016bd76-5e2b-4b1f-8547-15b7146c9b6f" in namespace "emptydir-7225" to be "Succeeded or Failed" May 19 00:50:45.309: INFO: Pod "pod-7016bd76-5e2b-4b1f-8547-15b7146c9b6f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.280302ms May 19 00:50:47.313: INFO: Pod "pod-7016bd76-5e2b-4b1f-8547-15b7146c9b6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00625195s May 19 00:50:49.316: INFO: Pod "pod-7016bd76-5e2b-4b1f-8547-15b7146c9b6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009905412s STEP: Saw pod success May 19 00:50:49.316: INFO: Pod "pod-7016bd76-5e2b-4b1f-8547-15b7146c9b6f" satisfied condition "Succeeded or Failed" May 19 00:50:49.318: INFO: Trying to get logs from node latest-worker2 pod pod-7016bd76-5e2b-4b1f-8547-15b7146c9b6f container test-container: STEP: delete the pod May 19 00:50:49.351: INFO: Waiting for pod pod-7016bd76-5e2b-4b1f-8547-15b7146c9b6f to disappear May 19 00:50:49.405: INFO: Pod pod-7016bd76-5e2b-4b1f-8547-15b7146c9b6f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:50:49.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7225" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":220,"skipped":3618,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:50:49.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:51:05.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1592" for this suite. 
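The Job above completes even though its tasks sometimes fail because restartPolicy: OnFailure makes the kubelet restart the failed container inside the same pod ("locally"), rather than having the Job controller replace the pod. A minimal sketch of such a Job follows; the emptyDir marker file is an assumption about how "fail once, then succeed" can be arranged, and the image, names, and namespace are illustrative (clientset and ctx as in the first sketch, with batchv1 "k8s.io/api/batch/v1" and corev1 "k8s.io/api/core/v1" imported):

ctx := context.TODO()
completions := int32(2)
job := &batchv1.Job{
    ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
    Spec: batchv1.JobSpec{
        Completions: &completions,
        Template: corev1.PodTemplateSpec{
            Spec: corev1.PodSpec{
                // OnFailure keeps retries local to the pod; the Job still
                // counts the pod as one completion once it finally exits 0.
                RestartPolicy: corev1.RestartPolicyOnFailure,
                Volumes: []corev1.Volume{{
                    Name:         "data",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{{
                    Name:  "c",
                    Image: "docker.io/library/busybox:1.29",
                    // Fail the first attempt, succeed after the restart; the
                    // emptyDir volume survives container restarts, so the
                    // marker file is still present on the second run.
                    Command: []string{"/bin/sh", "-c",
                        "if [ -f /data/done ]; then exit 0; fi; touch /data/done; exit 1"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
                }},
            },
        },
    },
}
_, err := clientset.BatchV1().Jobs("default").Create(ctx, job, metav1.CreateOptions{})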
• [SLOW TEST:16.130 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":221,"skipped":3628,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:51:05.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 19 00:53:06.189: INFO: Successfully updated pod "var-expansion-a56ae6b8-040d-4578-989f-28027696534c" STEP: waiting for pod running STEP: deleting the pod gracefully May 19 00:53:08.202: INFO: Deleting pod "var-expansion-a56ae6b8-040d-4578-989f-28027696534c" in namespace "var-expansion-5115" May 19 00:53:08.207: INFO: Wait up to 5m0s for pod "var-expansion-a56ae6b8-040d-4578-989f-28027696534c" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:53:42.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5115" for this suite. 
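The two-minute wait above ("creating the pod with failed condition", then "updating the pod") comes from subPathExpr: the container's volume mount expands an environment variable that is fed from a pod annotation via the downward API, so while the annotation is missing the mount cannot be set up and the container never starts. A minimal sketch of such a pod spec; the annotation key, env var name, and mount paths are assumptions that mirror the test's description, not its exact source:

pod := &corev1.Pod{
    ObjectMeta: metav1.ObjectMeta{
        Name: "var-expansion-demo",
        // Create the pod *without* this annotation to reproduce the failed
        // condition; updating the pod to add it lets the mount succeed.
        Annotations: map[string]string{"mysubpath": "ok"},
    },
    Spec: corev1.PodSpec{
        Containers: []corev1.Container{{
            Name:    "c",
            Image:   "docker.io/library/busybox:1.29",
            Command: []string{"sh", "-c", "sleep 3600"},
            Env: []corev1.EnvVar{{
                Name: "POD_SUBPATH",
                ValueFrom: &corev1.EnvVarSource{
                    FieldRef: &corev1.ObjectFieldSelector{
                        FieldPath: "metadata.annotations['mysubpath']",
                    },
                },
            }},
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "data",
                MountPath: "/data",
                // Expansion of $(POD_SUBPATH) fails while the annotation,
                // and therefore the env var, is absent.
                SubPathExpr: "$(POD_SUBPATH)",
            }},
        }},
        Volumes: []corev1.Volume{{
            Name:         "data",
            VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
        }},
    },
}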
• [SLOW TEST:156.740 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":222,"skipped":3633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:53:42.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 19 00:53:42.356: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3415 /api/v1/namespaces/watch-3415/configmaps/e2e-watch-test-label-changed 7a023f11-68c8-46dc-836d-44ebbd9dd806 5828662 0 2020-05-19 00:53:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-19 00:53:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:53:42.357: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3415 /api/v1/namespaces/watch-3415/configmaps/e2e-watch-test-label-changed 7a023f11-68c8-46dc-836d-44ebbd9dd806 5828663 0 2020-05-19 00:53:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-19 00:53:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:53:42.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3415 /api/v1/namespaces/watch-3415/configmaps/e2e-watch-test-label-changed 7a023f11-68c8-46dc-836d-44ebbd9dd806 5828664 0 2020-05-19 00:53:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-19 00:53:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: 
changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 19 00:53:52.386: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3415 /api/v1/namespaces/watch-3415/configmaps/e2e-watch-test-label-changed 7a023f11-68c8-46dc-836d-44ebbd9dd806 5828704 0 2020-05-19 00:53:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-19 00:53:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:53:52.386: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3415 /api/v1/namespaces/watch-3415/configmaps/e2e-watch-test-label-changed 7a023f11-68c8-46dc-836d-44ebbd9dd806 5828705 0 2020-05-19 00:53:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-19 00:53:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:53:52.386: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3415 /api/v1/namespaces/watch-3415/configmaps/e2e-watch-test-label-changed 7a023f11-68c8-46dc-836d-44ebbd9dd806 5828706 0 2020-05-19 00:53:42 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-19 00:53:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:53:52.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3415" for this suite. 
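A minimal sketch of the filtered watch this test exercises (namespace and label selector are the ones visible in the log; clientset and ctx as in the earlier sketches). Because the watch is scoped by a label selector, removing the label is delivered to the watcher as DELETED and restoring it as ADDED, even when the ConfigMap itself was never deleted; that is exactly the event sequence logged above:

w, err := clientset.CoreV1().ConfigMaps("watch-3415").Watch(ctx, metav1.ListOptions{
    LabelSelector: "watch-this-configmap=label-changed-and-restored",
})
if err != nil {
    panic(err)
}
defer w.Stop()
for ev := range w.ResultChan() {
    // Mirrors the "Got : ADDED/MODIFIED/DELETED" lines above.
    fmt.Printf("Got : %v %v\n", ev.Type, ev.Object)
}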
• [SLOW TEST:10.110 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":223,"skipped":3658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:53:52.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:53:52.504: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 19 00:53:57.507: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 19 00:53:57.507: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 19 00:53:59.511: INFO: Creating deployment "test-rollover-deployment" May 19 00:53:59.592: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 19 00:54:01.598: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 19 00:54:01.605: INFO: Ensure that both replica sets have 1 created replica May 19 00:54:01.611: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 19 00:54:01.618: INFO: Updating deployment test-rollover-deployment May 19 00:54:01.618: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 19 00:54:03.692: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 19 00:54:03.698: INFO: Make sure deployment "test-rollover-deployment" is complete May 19 00:54:03.704: INFO: all replica sets need to contain the pod-template-hash label May 19 00:54:03.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446441, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:54:05.717: INFO: all replica sets need to contain the pod-template-hash label May 19 00:54:05.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446445, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:54:07.713: INFO: all replica sets need to contain the pod-template-hash label May 19 00:54:07.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446445, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:54:09.711: INFO: all replica sets need to contain the pod-template-hash label May 19 00:54:09.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446445, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:54:11.722: INFO: all replica sets need to contain the pod-template-hash label May 19 00:54:11.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446445, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:54:13.711: INFO: all replica sets need to contain the pod-template-hash label May 19 00:54:13.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446445, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:54:15.762: INFO: May 19 00:54:15.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446455, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446439, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:54:17.743: INFO: May 19 00:54:17.743: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 19 00:54:17.752: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4238 /apis/apps/v1/namespaces/deployment-4238/deployments/test-rollover-deployment 8fb05295-7e40-4f41-b672-7b5e7f4a7b65 5828861 2 2020-05-19 00:53:59 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-19 00:54:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-19 00:54:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004569008 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-19 00:53:59 +0000 UTC,LastTransitionTime:2020-05-19 00:53:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-19 00:54:15 +0000 UTC,LastTransitionTime:2020-05-19 00:53:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 19 00:54:17.756: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-4238 /apis/apps/v1/namespaces/deployment-4238/replicasets/test-rollover-deployment-7c4fd9c879 9aef8116-35e1-410a-8e79-a5c0adc30f25 5828849 2 2020-05-19 00:54:01 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 8fb05295-7e40-4f41-b672-7b5e7f4a7b65 0xc0024b7127 0xc0024b7128}] [] [{kube-controller-manager Update apps/v1 2020-05-19 00:54:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fb05295-7e40-4f41-b672-7b5e7f4a7b65\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0024b7228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 19 00:54:17.756: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 19 00:54:17.756: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4238 /apis/apps/v1/namespaces/deployment-4238/replicasets/test-rollover-controller 367d0f6f-eb9b-43df-9fee-8b11e3f6c6da 5828859 2 2020-05-19 00:53:52 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 8fb05295-7e40-4f41-b672-7b5e7f4a7b65 0xc0024b6c0f 0xc0024b6c20}] [] [{e2e.test Update apps/v1 2020-05-19 00:53:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-19 00:54:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fb05295-7e40-4f41-b672-7b5e7f4a7b65\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0024b6e38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 00:54:17.756: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-4238 /apis/apps/v1/namespaces/deployment-4238/replicasets/test-rollover-deployment-5686c4cfd5 0becc609-abf4-4f16-af5a-e77d21538082 5828799 2 2020-05-19 00:53:59 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 8fb05295-7e40-4f41-b672-7b5e7f4a7b65 0xc0024b6eb7 0xc0024b6eb8}] [] [{kube-controller-manager Update apps/v1 2020-05-19 00:54:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8fb05295-7e40-4f41-b672-7b5e7f4a7b65\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0024b7038 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 00:54:17.759: INFO: Pod "test-rollover-deployment-7c4fd9c879-q7mhh" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-q7mhh test-rollover-deployment-7c4fd9c879- deployment-4238 /api/v1/namespaces/deployment-4238/pods/test-rollover-deployment-7c4fd9c879-q7mhh fda899d7-9fa2-495d-ab57-53bb928f50f2 5828817 0 2020-05-19 00:54:01 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 9aef8116-35e1-410a-8e79-a5c0adc30f25 0xc0031787d7 0xc0031787d8}] [] [{kube-controller-manager Update v1 2020-05-19 00:54:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9aef8116-35e1-410a-8e79-a5c0adc30f25\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:54:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.215\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pj5qp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pj5qp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pj5qp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:54:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-19 00:54:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:54:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:54:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.215,StartTime:2020-05-19 00:54:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:54:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://8f4949f8d0af99421d1798ab47fe90af65dbd8e205bb27dd67b9068d8debd12a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:54:17.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4238" for this suite. • [SLOW TEST:25.371 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":224,"skipped":3710,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:54:17.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 19 00:54:17.834: INFO: Waiting up to 5m0s for pod "pod-243bfe0c-3d18-4e7c-ae6f-d258b9fe295a" in namespace "emptydir-1130" to be "Succeeded or Failed" May 19 00:54:17.838: INFO: Pod "pod-243bfe0c-3d18-4e7c-ae6f-d258b9fe295a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.642785ms May 19 00:54:19.842: INFO: Pod "pod-243bfe0c-3d18-4e7c-ae6f-d258b9fe295a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007968376s May 19 00:54:21.847: INFO: Pod "pod-243bfe0c-3d18-4e7c-ae6f-d258b9fe295a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01275633s STEP: Saw pod success May 19 00:54:21.847: INFO: Pod "pod-243bfe0c-3d18-4e7c-ae6f-d258b9fe295a" satisfied condition "Succeeded or Failed" May 19 00:54:21.851: INFO: Trying to get logs from node latest-worker2 pod pod-243bfe0c-3d18-4e7c-ae6f-d258b9fe295a container test-container: STEP: delete the pod May 19 00:54:21.960: INFO: Waiting for pod pod-243bfe0c-3d18-4e7c-ae6f-d258b9fe295a to disappear May 19 00:54:21.966: INFO: Pod pod-243bfe0c-3d18-4e7c-ae6f-d258b9fe295a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:54:21.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1130" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":225,"skipped":3717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:54:21.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:54:22.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-739" for this suite. 
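A minimal sketch of the Event lifecycle the test above steps through: create, list across all namespaces, patch, fetch, delete, and list again. The field values and the patch payload are assumptions; the calls are the standard core/v1 Events client (clientset and ctx as in the earlier sketches, plus "k8s.io/apimachinery/pkg/types" for the patch type):

ns := "events-739"
ev := &corev1.Event{
    ObjectMeta:     metav1.ObjectMeta{Name: "test-event"},
    InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "some-pod"},
    Type:           corev1.EventTypeNormal,
    Reason:         "Testing",
    Message:        "original message",
}
created, err := clientset.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{})
if err != nil {
    panic(err)
}
// An empty namespace argument lists events in all namespaces.
_, _ = clientset.CoreV1().Events("").List(ctx, metav1.ListOptions{})
// Patch just the message, then read the event back before deleting it.
patch := []byte(`{"message":"patched message"}`)
_, _ = clientset.CoreV1().Events(ns).Patch(ctx, created.Name,
    types.StrategicMergePatchType, patch, metav1.PatchOptions{})
got, _ := clientset.CoreV1().Events(ns).Get(ctx, created.Name, metav1.GetOptions{})
_ = clientset.CoreV1().Events(ns).Delete(ctx, got.Name, metav1.DeleteOptions{})
list, _ := clientset.CoreV1().Events("").List(ctx, metav1.ListOptions{})
fmt.Println(len(list.Items)) // the test event should be gone again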
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":226,"skipped":3748,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:54:22.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 19 00:54:22.733: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 19 00:54:24.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446462, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446462, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446462, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446462, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:54:26.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446462, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446462, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446462, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446462, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 00:54:29.782: INFO: 
Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:54:29.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:54:31.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-269" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.040 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":227,"skipped":3765,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:54:31.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:54:31.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6295" for this suite. 
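------------------------------
The secret-patching spec above creates a labeled secret, strategic-merge-patches its labels and data, deletes it via a LabelSelector, and then lists to confirm nothing still matches. A minimal sketch of the same verbs follows, assuming a pre-built clientset; the helper name, object names, and label keys are illustrative (the run used the generated namespace "secrets-6295").

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// secretPatchLifecycle mirrors the STEPs of the spec above.
func secretPatchLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
	// STEP: creating a secret
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-secret",
			Labels: map[string]string{"testsecret": "true"},
		},
		StringData: map[string]string{"key": "value"},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		return err
	}
	// STEP: patching the secret; adds a label and swaps the data value
	// ("dmFsdWUy" is base64 for "value2", since .data holds base64 bytes).
	patch := []byte(`{"metadata":{"labels":{"testsecret-constant":"true"}},"data":{"key":"dmFsdWUy"}}`)
	if _, err := cs.CoreV1().Secrets(ns).Patch(ctx, "test-secret",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	// STEP: deleting the secret using a LabelSelector
	if err := cs.CoreV1().Secrets(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testsecret-constant=true"}); err != nil {
		return err
	}
	// STEP: listing secrets in all namespaces, searching for the patched label
	remaining, err := cs.CoreV1().Secrets("").List(ctx,
		metav1.ListOptions{LabelSelector: "testsecret-constant=true"})
	if err != nil {
		return err
	}
	if n := len(remaining.Items); n != 0 {
		return fmt.Errorf("expected 0 matching secrets, found %d", n)
	}
	return nil
}
------------------------------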
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":228,"skipped":3801,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:54:31.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-9641/secret-test-f1b426c2-f7b9-47be-aa49-f222d9e303be STEP: Creating a pod to test consume secrets May 19 00:54:31.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-4630eacb-7ed0-46f9-9855-c95611c17641" in namespace "secrets-9641" to be "Succeeded or Failed" May 19 00:54:31.532: INFO: Pod "pod-configmaps-4630eacb-7ed0-46f9-9855-c95611c17641": Phase="Pending", Reason="", readiness=false. Elapsed: 17.253753ms May 19 00:54:33.536: INFO: Pod "pod-configmaps-4630eacb-7ed0-46f9-9855-c95611c17641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021460005s May 19 00:54:35.548: INFO: Pod "pod-configmaps-4630eacb-7ed0-46f9-9855-c95611c17641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033454804s STEP: Saw pod success May 19 00:54:35.548: INFO: Pod "pod-configmaps-4630eacb-7ed0-46f9-9855-c95611c17641" satisfied condition "Succeeded or Failed" May 19 00:54:35.551: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-4630eacb-7ed0-46f9-9855-c95611c17641 container env-test: STEP: delete the pod May 19 00:54:35.629: INFO: Waiting for pod pod-configmaps-4630eacb-7ed0-46f9-9855-c95611c17641 to disappear May 19 00:54:35.638: INFO: Pod pod-configmaps-4630eacb-7ed0-46f9-9855-c95611c17641 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:54:35.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9641" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":229,"skipped":3813,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:54:35.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:54:46.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3166" for this suite. • [SLOW TEST:11.182 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":230,"skipped":3827,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:54:46.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 19 00:54:46.898: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:55:04.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3522" for this suite. 
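------------------------------
The rename check above works because the apiserver aggregates every served CRD version into the OpenAPI v2 document at /openapi/v2; renaming a version should drop the old definition key and publish a new one, while the other versions stay unchanged. A rough sketch of fetching that document with client-go's discovery REST client follows; the plain substring check and the version names "v2"/"v3" are illustrative stand-ins for a proper definition-key lookup.

package main

import (
	"context"
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the aggregated OpenAPI v2 spec published by the apiserver.
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	spec := string(raw)
	// After the rename, the new version name should appear and the old one
	// should be gone ("v2"/"v3" are placeholder version names).
	for _, version := range []string{"v2", "v3"} {
		fmt.Printf("spec mentions %q: %v\n", version, strings.Contains(spec, version))
	}
}
------------------------------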
• [SLOW TEST:17.651 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":231,"skipped":3863,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:55:04.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 19 00:55:04.609: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-a 8ca81c69-200e-401b-8dc7-a0a154ce1d9c 5829219 0 2020-05-19 00:55:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:55:04.609: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-a 8ca81c69-200e-401b-8dc7-a0a154ce1d9c 5829219 0 2020-05-19 00:55:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 19 00:55:14.618: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-a 8ca81c69-200e-401b-8dc7-a0a154ce1d9c 5829254 0 2020-05-19 00:55:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:55:14.618: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-a 8ca81c69-200e-401b-8dc7-a0a154ce1d9c 5829254 0 
2020-05-19 00:55:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 19 00:55:24.627: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-a 8ca81c69-200e-401b-8dc7-a0a154ce1d9c 5829284 0 2020-05-19 00:55:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:55:24.627: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-a 8ca81c69-200e-401b-8dc7-a0a154ce1d9c 5829284 0 2020-05-19 00:55:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 19 00:55:34.636: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-a 8ca81c69-200e-401b-8dc7-a0a154ce1d9c 5829314 0 2020-05-19 00:55:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:55:34.636: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-a 8ca81c69-200e-401b-8dc7-a0a154ce1d9c 5829314 0 2020-05-19 00:55:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 19 00:55:44.671: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-b 840c70fc-7c53-4303-ad9a-7cdca0ae404f 5829343 0 2020-05-19 00:55:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:55:44.671: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-b 840c70fc-7c53-4303-ad9a-7cdca0ae404f 5829343 0 2020-05-19 
00:55:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 19 00:55:54.677: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-b 840c70fc-7c53-4303-ad9a-7cdca0ae404f 5829372 0 2020-05-19 00:55:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:55:54.678: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3996 /api/v1/namespaces/watch-3996/configmaps/e2e-watch-test-configmap-b 840c70fc-7c53-4303-ad9a-7cdca0ae404f 5829372 0 2020-05-19 00:55:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-19 00:55:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:56:04.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3996" for this suite. • [SLOW TEST:60.209 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":232,"skipped":3869,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:56:04.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-319.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-319.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-319.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-319.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-319.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-319.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 00:56:10.959: INFO: DNS probes using dns-319/dns-test-0fe1f144-a118-449b-978a-345cbbd53e2d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:56:11.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-319" for this suite. • [SLOW TEST:6.480 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":233,"skipped":3885,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:56:11.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:56:22.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9928" for this suite. 
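------------------------------
Both ResourceQuota specs in this stretch follow the same shape: quota usage is recomputed asynchronously by the controller manager, so the test creates the quota, creates the tracked object (a ReplicationController earlier, a Service here), polls status.used until it reflects the object, then deletes it and polls again for the release. A sketch of the capture half for the Service case, assuming a pre-built clientset; the object names and the hard limit of 10 services are illustrative.

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// quotaCapturesService creates a quota and a service, then waits for the
// quota status to count the service.
func quotaCapturesService(ctx context.Context, cs kubernetes.Interface, ns string) error {
	// STEP: creating a ResourceQuota with a hard limit on services
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourceServices: resource.MustParse("10")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		return err
	}
	// STEP: creating a Service
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "example"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	// STEP: ensuring quota status captures service creation; poll, because
	// the quota controller updates status.used asynchronously
	return wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
		got, err := cs.CoreV1().ResourceQuotas(ns).Get(ctx, "test-quota", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		used := got.Status.Used[corev1.ResourceServices]
		return used.Value() == 1, nil
	})
}
------------------------------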
• [SLOW TEST:11.427 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":288,"completed":234,"skipped":3897,"failed":0} SSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:56:22.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5440 STEP: creating service affinity-clusterip in namespace services-5440 STEP: creating replication controller affinity-clusterip in namespace services-5440 I0519 00:56:22.762065 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-5440, replica count: 3 I0519 00:56:25.812541 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:56:28.812836 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 00:56:28.819: INFO: Creating new exec pod May 19 00:56:33.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5440 execpod-affinity9sw4x -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 19 00:56:36.959: INFO: stderr: "I0519 00:56:36.846807 3622 log.go:172] (0xc00097e840) (0xc00072d860) Create stream\nI0519 00:56:36.846847 3622 log.go:172] (0xc00097e840) (0xc00072d860) Stream added, broadcasting: 1\nI0519 00:56:36.848934 3622 log.go:172] (0xc00097e840) Reply frame received for 1\nI0519 00:56:36.848962 3622 log.go:172] (0xc00097e840) (0xc00071e0a0) Create stream\nI0519 00:56:36.848970 3622 log.go:172] (0xc00097e840) (0xc00071e0a0) Stream added, broadcasting: 3\nI0519 00:56:36.849938 3622 log.go:172] (0xc00097e840) Reply frame received for 3\nI0519 00:56:36.849966 3622 log.go:172] (0xc00097e840) (0xc000702780) Create stream\nI0519 00:56:36.849975 3622 log.go:172] (0xc00097e840) (0xc000702780) Stream added, broadcasting: 5\nI0519 00:56:36.850515 3622 log.go:172] (0xc00097e840) Reply frame received for 5\nI0519 00:56:36.932898 3622 log.go:172] (0xc00097e840) Data frame received for 5\nI0519 00:56:36.932920 3622 log.go:172] (0xc000702780) (5) Data frame handling\nI0519 00:56:36.932935 3622 log.go:172] (0xc000702780) (5) Data frame sent\n+ nc 
-zv -t -w 2 affinity-clusterip 80\nI0519 00:56:36.947640 3622 log.go:172] (0xc00097e840) Data frame received for 5\nI0519 00:56:36.947772 3622 log.go:172] (0xc000702780) (5) Data frame handling\nI0519 00:56:36.947825 3622 log.go:172] (0xc00097e840) Data frame received for 3\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0519 00:56:36.947856 3622 log.go:172] (0xc00071e0a0) (3) Data frame handling\nI0519 00:56:36.947922 3622 log.go:172] (0xc000702780) (5) Data frame sent\nI0519 00:56:36.948411 3622 log.go:172] (0xc00097e840) Data frame received for 5\nI0519 00:56:36.948444 3622 log.go:172] (0xc000702780) (5) Data frame handling\nI0519 00:56:36.954340 3622 log.go:172] (0xc00097e840) Data frame received for 1\nI0519 00:56:36.954421 3622 log.go:172] (0xc00072d860) (1) Data frame handling\nI0519 00:56:36.954443 3622 log.go:172] (0xc00072d860) (1) Data frame sent\nI0519 00:56:36.954464 3622 log.go:172] (0xc00097e840) (0xc00072d860) Stream removed, broadcasting: 1\nI0519 00:56:36.954490 3622 log.go:172] (0xc00097e840) Go away received\nI0519 00:56:36.954848 3622 log.go:172] (0xc00097e840) (0xc00072d860) Stream removed, broadcasting: 1\nI0519 00:56:36.954867 3622 log.go:172] (0xc00097e840) (0xc00071e0a0) Stream removed, broadcasting: 3\nI0519 00:56:36.954875 3622 log.go:172] (0xc00097e840) (0xc000702780) Stream removed, broadcasting: 5\n" May 19 00:56:36.959: INFO: stdout: "" May 19 00:56:36.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5440 execpod-affinity9sw4x -- /bin/sh -x -c nc -zv -t -w 2 10.103.46.126 80' May 19 00:56:37.189: INFO: stderr: "I0519 00:56:37.093378 3650 log.go:172] (0xc00077bb80) (0xc0002e8280) Create stream\nI0519 00:56:37.093446 3650 log.go:172] (0xc00077bb80) (0xc0002e8280) Stream added, broadcasting: 1\nI0519 00:56:37.096252 3650 log.go:172] (0xc00077bb80) Reply frame received for 1\nI0519 00:56:37.096291 3650 log.go:172] (0xc00077bb80) (0xc00028ee60) Create stream\nI0519 00:56:37.096305 3650 log.go:172] (0xc00077bb80) (0xc00028ee60) Stream added, broadcasting: 3\nI0519 00:56:37.097046 3650 log.go:172] (0xc00077bb80) Reply frame received for 3\nI0519 00:56:37.097066 3650 log.go:172] (0xc00077bb80) (0xc0002e8a00) Create stream\nI0519 00:56:37.097078 3650 log.go:172] (0xc00077bb80) (0xc0002e8a00) Stream added, broadcasting: 5\nI0519 00:56:37.098208 3650 log.go:172] (0xc00077bb80) Reply frame received for 5\nI0519 00:56:37.181855 3650 log.go:172] (0xc00077bb80) Data frame received for 3\nI0519 00:56:37.182009 3650 log.go:172] (0xc00028ee60) (3) Data frame handling\nI0519 00:56:37.182058 3650 log.go:172] (0xc00077bb80) Data frame received for 5\nI0519 00:56:37.182078 3650 log.go:172] (0xc0002e8a00) (5) Data frame handling\nI0519 00:56:37.182099 3650 log.go:172] (0xc0002e8a00) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.46.126 80\nConnection to 10.103.46.126 80 port [tcp/http] succeeded!\nI0519 00:56:37.182251 3650 log.go:172] (0xc00077bb80) Data frame received for 5\nI0519 00:56:37.182291 3650 log.go:172] (0xc0002e8a00) (5) Data frame handling\nI0519 00:56:37.184109 3650 log.go:172] (0xc00077bb80) Data frame received for 1\nI0519 00:56:37.184135 3650 log.go:172] (0xc0002e8280) (1) Data frame handling\nI0519 00:56:37.184150 3650 log.go:172] (0xc0002e8280) (1) Data frame sent\nI0519 00:56:37.184165 3650 log.go:172] (0xc00077bb80) (0xc0002e8280) Stream removed, broadcasting: 1\nI0519 00:56:37.184182 3650 log.go:172] (0xc00077bb80) Go away received\nI0519 00:56:37.184732 
3650 log.go:172] (0xc00077bb80) (0xc0002e8280) Stream removed, broadcasting: 1\nI0519 00:56:37.184832 3650 log.go:172] (0xc00077bb80) (0xc00028ee60) Stream removed, broadcasting: 3\nI0519 00:56:37.184863 3650 log.go:172] (0xc00077bb80) (0xc0002e8a00) Stream removed, broadcasting: 5\n" May 19 00:56:37.189: INFO: stdout: "" May 19 00:56:37.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5440 execpod-affinity9sw4x -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.103.46.126:80/ ; done' May 19 00:56:37.607: INFO: stderr: "I0519 00:56:37.318419 3672 log.go:172] (0xc000598fd0) (0xc000a08780) Create stream\nI0519 00:56:37.318493 3672 log.go:172] (0xc000598fd0) (0xc000a08780) Stream added, broadcasting: 1\nI0519 00:56:37.325801 3672 log.go:172] (0xc000598fd0) Reply frame received for 1\nI0519 00:56:37.325851 3672 log.go:172] (0xc000598fd0) (0xc0006eeb40) Create stream\nI0519 00:56:37.325907 3672 log.go:172] (0xc000598fd0) (0xc0006eeb40) Stream added, broadcasting: 3\nI0519 00:56:37.327276 3672 log.go:172] (0xc000598fd0) Reply frame received for 3\nI0519 00:56:37.327300 3672 log.go:172] (0xc000598fd0) (0xc0005de3c0) Create stream\nI0519 00:56:37.327307 3672 log.go:172] (0xc000598fd0) (0xc0005de3c0) Stream added, broadcasting: 5\nI0519 00:56:37.328167 3672 log.go:172] (0xc000598fd0) Reply frame received for 5\nI0519 00:56:37.514676 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.514712 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.514739 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.514771 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.514782 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.514797 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.518546 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.518577 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.518598 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.518779 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.518810 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.518850 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\nI0519 00:56:37.518872 3672 log.go:172] (0xc000598fd0) Data frame received for 5\n+ echo\nI0519 00:56:37.518885 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.518926 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.518950 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.518973 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.518999 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.527219 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.527234 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.527243 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.528128 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.528155 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.528168 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.528185 3672 log.go:172] (0xc000598fd0) Data frame received for 
5\nI0519 00:56:37.528195 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.528215 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.532951 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.532967 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.532976 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.533748 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.533772 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.533786 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.533798 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.533806 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.533815 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.537499 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.537522 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.537540 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.538369 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.538388 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.538403 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.538490 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.538502 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.538514 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.542621 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.542646 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.542659 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.542863 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.542882 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.542890 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.542905 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.542915 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.542926 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -sI0519 00:56:37.542935 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.542970 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.542989 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.548552 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.548574 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.548604 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.548886 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.548899 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.548914 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.548975 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.548992 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.549007 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 
00:56:37.554871 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.554895 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.554910 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.555255 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.555291 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.555309 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.555331 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.555348 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.555361 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.560741 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.560771 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.560790 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.561468 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.561482 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.561496 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.561527 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.561543 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.561553 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.565612 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.565637 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.565663 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.566386 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.566403 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.566411 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.566421 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.566427 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.566435 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\nI0519 00:56:37.571804 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.571827 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.571853 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.572130 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.572141 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.572162 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.572190 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.572208 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.572231 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.575691 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.575706 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.575716 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.576140 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.576158 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.576174 3672 log.go:172] (0xc0005de3c0) (5) Data frame 
sent\nI0519 00:56:37.576185 3672 log.go:172] (0xc000598fd0) Data frame received for 5\n+ echo\n+ curl -q -sI0519 00:56:37.576193 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\n --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.576203 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.576221 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.576233 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.576249 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\nI0519 00:56:37.579746 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.579778 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.579809 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.580093 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.580124 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.580144 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0519 00:56:37.580171 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.580201 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.580220 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.580242 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.580254 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.580272 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n 2 http://10.103.46.126:80/\nI0519 00:56:37.583956 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.583973 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.583998 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.584287 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.584301 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.584324 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\nI0519 00:56:37.584336 3672 log.go:172] (0xc000598fd0) Data frame received for 5\n+ echo\n+ curl -qI0519 00:56:37.584344 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.584375 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.584394 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.584410 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.584429 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.591450 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.591483 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.591507 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.592658 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.592687 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.592731 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.592750 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.592767 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.592778 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.596149 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.596278 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.596362 3672 log.go:172] (0xc0006eeb40) (3) Data 
frame sent\nI0519 00:56:37.596641 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.596688 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.596715 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.596761 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.596793 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.596814 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\nI0519 00:56:37.596824 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.596857 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.46.126:80/\nI0519 00:56:37.596886 3672 log.go:172] (0xc0005de3c0) (5) Data frame sent\nI0519 00:56:37.599855 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.599874 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.599887 3672 log.go:172] (0xc0006eeb40) (3) Data frame sent\nI0519 00:56:37.600492 3672 log.go:172] (0xc000598fd0) Data frame received for 3\nI0519 00:56:37.600582 3672 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0519 00:56:37.600621 3672 log.go:172] (0xc000598fd0) Data frame received for 5\nI0519 00:56:37.600639 3672 log.go:172] (0xc0005de3c0) (5) Data frame handling\nI0519 00:56:37.602519 3672 log.go:172] (0xc000598fd0) Data frame received for 1\nI0519 00:56:37.602575 3672 log.go:172] (0xc000a08780) (1) Data frame handling\nI0519 00:56:37.602615 3672 log.go:172] (0xc000a08780) (1) Data frame sent\nI0519 00:56:37.602641 3672 log.go:172] (0xc000598fd0) (0xc000a08780) Stream removed, broadcasting: 1\nI0519 00:56:37.602664 3672 log.go:172] (0xc000598fd0) Go away received\nI0519 00:56:37.603219 3672 log.go:172] (0xc000598fd0) (0xc000a08780) Stream removed, broadcasting: 1\nI0519 00:56:37.603257 3672 log.go:172] (0xc000598fd0) (0xc0006eeb40) Stream removed, broadcasting: 3\nI0519 00:56:37.603274 3672 log.go:172] (0xc000598fd0) (0xc0005de3c0) Stream removed, broadcasting: 5\n" May 19 00:56:37.607: INFO: stdout: "\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f\naffinity-clusterip-ljk9f" May 19 00:56:37.607: INFO: Received response from host: May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 
00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Received response from host: affinity-clusterip-ljk9f May 19 00:56:37.607: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-5440, will wait for the garbage collector to delete the pods May 19 00:56:37.717: INFO: Deleting ReplicationController affinity-clusterip took: 19.625997ms May 19 00:56:38.217: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.209697ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:56:43.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5440" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:21.214 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":235,"skipped":3903,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:56:43.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:56:43.888: INFO: Creating deployment "webserver-deployment" May 19 00:56:43.897: INFO: Waiting for observed generation 1 May 19 00:56:45.906: INFO: Waiting for all required pods to come up May 19 00:56:45.911: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 19 00:56:57.920: INFO: Waiting for deployment "webserver-deployment" to complete May 19 00:56:57.925: INFO: Updating deployment "webserver-deployment" with a non-existent image May 19 00:56:57.930: INFO: Updating deployment webserver-deployment May 19 00:56:57.930: INFO: Waiting for observed generation 2 May 19 00:56:59.936: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 19 00:56:59.939: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 19 00:56:59.940: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 19 00:56:59.947: INFO: Verifying that the second rollout's 
replicaset has .status.availableReplicas = 0 May 19 00:56:59.947: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 19 00:56:59.949: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 19 00:56:59.954: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 19 00:56:59.954: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 19 00:56:59.960: INFO: Updating deployment webserver-deployment May 19 00:56:59.960: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 19 00:57:00.305: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 19 00:57:03.079: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 19 00:57:03.555: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-884 /apis/apps/v1/namespaces/deployment-884/deployments/webserver-deployment 5f56bd57-fa54-43eb-a2d7-926335e2c4a4 5830008 3 2020-05-19 00:56:43 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-19 00:56:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003379178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] 
nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-19 00:57:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-19 00:57:01 +0000 UTC,LastTransitionTime:2020-05-19 00:56:43 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 19 00:57:03.668: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-884 /apis/apps/v1/namespaces/deployment-884/replicasets/webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 5830005 3 2020-05-19 00:56:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5f56bd57-fa54-43eb-a2d7-926335e2c4a4 0xc003379797 0xc003379798}] [] [{kube-controller-manager Update apps/v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f56bd57-fa54-43eb-a2d7-926335e2c4a4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003379838 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 00:57:03.668: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 19 00:57:03.668: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-884 /apis/apps/v1/namespaces/deployment-884/replicasets/webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 5829989 3 2020-05-19 00:56:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5f56bd57-fa54-43eb-a2d7-926335e2c4a4 0xc0033798a7 0xc0033798a8}] [] [{kube-controller-manager Update apps/v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f56bd57-fa54-43eb-a2d7-926335e2c4a4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003379918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 19 00:57:03.744: INFO: Pod "webserver-deployment-6676bcd6d4-477p4" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-477p4 webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-477p4 5b439e42-f004-4946-a448-b60f2bc1df05 5830051 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 
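
The two ReplicaSet dumps above show where the rollout is stuck: the new ReplicaSet (template image webserver:404) has Spec.Replicas:*13 but AvailableReplicas:0, while the old one (docker.io/library/httpd:2.4.38-alpine) has Spec.Replicas:*20 with only 8 pods available. The desired-versus-available comparison the test keeps logging can be reproduced outside the e2e framework with client-go; the following is a hedged sketch, assuming a reachable cluster via ~/.kube/config, with the namespace name taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e run points at (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Namespace from the log above; any namespace works.
	rsList, err := clientset.AppsV1().ReplicaSets("deployment-884").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, rs := range rsList.Items {
		// A ReplicaSet whose available count stays at 0 while desired > 0
		// (webserver-deployment-6676bcd6d4 above) is the stalled side of the rollout.
		fmt.Printf("%s: desired=%d available=%d\n",
			rs.Name, *rs.Spec.Replicas, rs.Status.AvailableReplicas)
	}
}
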
7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc003428c57 0xc003428c58}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Tolerat
ionSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.744: INFO: Pod "webserver-deployment-6676bcd6d4-76wxr" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-76wxr webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-76wxr d8645390-6cce-4d35-a1c1-c745f3203804 5830057 0 2020-05-19 00:56:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc003428e37 0xc003428e38}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.224\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.224,StartTime:2020-05-19 00:56:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.745: INFO: Pod "webserver-deployment-6676bcd6d4-cl6vz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-cl6vz webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-cl6vz e2f7698b-903c-4ac3-947f-a5cd78ae35e7 5830037 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc003429027 0xc003429028}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
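
The dump of webserver-deployment-6676bcd6d4-76wxr above exposes the root cause of the stall: the tag webserver:404 resolves to docker.io/library/webserver:404, which does not exist, so the kubelet reports ErrImagePull ("pull access denied, repository does not exist") and the container stays in Waiting. The pod therefore never becomes Ready, which is exactly why the new ReplicaSet's AvailableReplicas never leaves 0. Below is a self-contained sketch (not part of the e2e framework; the helper name and the sample pod reconstructed from the dump above are illustrative) of extracting that waiting reason from a Pod object:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// imagePullProblems reports containers stuck on image pulls, the failure
// mode visible in the dump above. ImagePullBackOff is the kubelet's
// follow-up state after repeated ErrImagePull failures.
func imagePullProblems(pod *v1.Pod) []string {
	var out []string
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil &&
			(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
			out = append(out, fmt.Sprintf("%s: %s (image %s)", cs.Name, w.Reason, cs.Image))
		}
	}
	return out
}

func main() {
	// Sample pod reconstructed from the log: one container stuck on a bad tag.
	pod := &v1.Pod{Status: v1.PodStatus{
		ContainerStatuses: []v1.ContainerStatus{{
			Name:  "httpd",
			Image: "webserver:404",
			State: v1.ContainerState{Waiting: &v1.ContainerStateWaiting{Reason: "ErrImagePull"}},
		}},
	}}
	fmt.Println(imagePullProblems(pod)) // [httpd: ErrImagePull (image webserver:404)]
}
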
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.745: INFO: Pod "webserver-deployment-6676bcd6d4-h98j5" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-h98j5 webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-h98j5 41643ae7-256d-43c4-b077-40b917ee09cb 5829920 0 2020-05-19 00:56:58 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc003429227 0xc003429228}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:56:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.745: INFO: Pod "webserver-deployment-6676bcd6d4-jqzr2" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-jqzr2 webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-jqzr2 29a23eac-b80a-48c0-a4f7-7d3c664d82b8 5829915 0 2020-05-19 00:56:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc003429437 0xc003429438}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:56:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.746: INFO: Pod "webserver-deployment-6676bcd6d4-lnx7v" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lnx7v webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-lnx7v 92955afb-3122-4757-b8fc-07f7b634e7bd 5830054 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc003429657 0xc003429658}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.746: INFO: Pod "webserver-deployment-6676bcd6d4-nx6dd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nx6dd webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-nx6dd 044a7d6b-6f48-4338-8ea9-3c740ab042bc 5830009 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc003429837 0xc003429838}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.746: INFO: Pod "webserver-deployment-6676bcd6d4-q9977" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-q9977 webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-q9977 20fbeb0c-d6ba-4b45-ab7b-248840a4a623 5829921 0 2020-05-19 00:56:58 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc0034299e7 0xc0034299e8}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:56:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.746: INFO: Pod "webserver-deployment-6676bcd6d4-rmcfj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rmcfj webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-rmcfj b4947b5b-c728-442b-a276-5966d0e53542 5829893 0 2020-05-19 00:56:57 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc003429c17 0xc003429c18}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:56:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.747: INFO: Pod "webserver-deployment-6676bcd6d4-thcwv" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-thcwv webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-thcwv 5d95c835-663f-4285-9840-7f5468469a0d 5830022 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc003429e87 0xc003429e88}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.747: INFO: Pod "webserver-deployment-6676bcd6d4-z6drm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-z6drm webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-z6drm a4538405-eb73-4e31-91a6-e44579cea91f 5830060 0 2020-05-19 00:57:01 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc00331e0b7 0xc00331e0b8}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.747: INFO: Pod "webserver-deployment-6676bcd6d4-zgk5r" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zgk5r webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-zgk5r 0de0185c-d8f8-4bb3-8d8c-ce9273d4d086 5830012 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc00331e267 0xc00331e268}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.747: INFO: Pod "webserver-deployment-6676bcd6d4-zj55q" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zj55q webserver-deployment-6676bcd6d4- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-6676bcd6d4-zj55q 3ec21c25-bd5e-4924-a4eb-bacf23a14513 5830048 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7f7b9e23-387f-44f1-aa9a-340d011ba922 0xc00331e417 0xc00331e418}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f7b9e23-387f-44f1-aa9a-340d011ba922\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.748: INFO: Pod "webserver-deployment-84855cf797-2b54g" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2b54g webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-2b54g ec36af20-27e7-423c-94f2-4d706a61da91 5829859 0 2020-05-19 00:56:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331e5c7 0xc00331e5c8}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.222\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 
00:56:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.222,StartTime:2020-05-19 00:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:56:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6523ac3d582e14d767fd8107c4f1133a5463e9ba07dcda882ab4ed8ad2bd771a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.748: INFO: Pod "webserver-deployment-84855cf797-2g4lx" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-2g4lx webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-2g4lx ce10459f-1c04-4add-b827-a7e868d9f253 5830023 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331e7b7 0xc00331e7b8}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.748: INFO: Pod "webserver-deployment-84855cf797-69fpv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-69fpv webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-69fpv f3f6b861-24f6-4808-b9f8-cc1f6263086d 5830007 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331e987 0xc00331e988}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.749: INFO: Pod "webserver-deployment-84855cf797-95kbq" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-95kbq webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-95kbq fbe5c78f-018b-4a0a-bf39-449861b4f4b1 5829773 0 2020-05-19 00:56:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331eb77 0xc00331eb78}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.212\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 
00:56:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.212,StartTime:2020-05-19 00:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:56:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://344e4bd081e2ff480597331dc4a3a0d096399d46728852b111f5665834b32f32,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.212,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.749: INFO: Pod "webserver-deployment-84855cf797-9rdx5" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-9rdx5 webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-9rdx5 6a0492bd-376e-4065-a74f-6cc23e7ff6ad 5830056 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331ed97 0xc00331ed98}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.749: INFO: Pod "webserver-deployment-84855cf797-btxz7" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-btxz7 webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-btxz7 9b656317-8aa9-4efa-900b-e144e1e1fc38 5829795 0 2020-05-19 00:56:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331ef77 0xc00331ef78}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.214\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 
00:56:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.214,StartTime:2020-05-19 00:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:56:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ebf6825f5c72fca41760fc024f0de9b5ac5d28f1867791bc7069a1dcf4a3f02a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.749: INFO: Pod "webserver-deployment-84855cf797-csh8h" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-csh8h webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-csh8h 495351a9-6d96-42bd-af63-728638802a01 5829808 0 2020-05-19 00:56:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331f137 0xc00331f138}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.215\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 
00:56:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.215,StartTime:2020-05-19 00:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:56:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c64248647a884a34b77d0875184a1f17e0fab6b3e5fed1844c272c7f1ae962d7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.750: INFO: Pod "webserver-deployment-84855cf797-db96d" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-db96d webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-db96d 4539d57c-a09e-4766-a6e6-ff6fc93e251a 5830053 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331f2e7 0xc00331f2e8}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.750: INFO: Pod "webserver-deployment-84855cf797-fcsgm" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fcsgm webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-fcsgm 6160f3ab-21de-474e-9c15-26e25939402e 5829815 0 2020-05-19 00:56:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331f557 0xc00331f558}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.219\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 
00:56:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.219,StartTime:2020-05-19 00:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:56:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://54bf1f6b03353b91955b0ae068115efc2a01e14a9dd69667de7359ac12780b99,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.219,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.750: INFO: Pod "webserver-deployment-84855cf797-hbdrp" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hbdrp webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-hbdrp 7f8284c8-133c-4ab2-9f4a-7d9b3ca814f0 5830028 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331f797 0xc00331f798}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.751: INFO: Pod "webserver-deployment-84855cf797-j887c" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-j887c webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-j887c c1b786bb-b174-48b4-82ed-f45b377acd2a 5830029 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331f9d7 0xc00331f9d8}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.751: INFO: Pod "webserver-deployment-84855cf797-lgkkj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lgkkj webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-lgkkj fd713a92-cfd6-43e9-85f4-2ec601a99a83 5829990 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331fbe7 0xc00331fbe8}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.751: INFO: Pod "webserver-deployment-84855cf797-ptl77" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ptl77 webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-ptl77 f2d63f18-ab50-4eb3-a408-484c7c19c019 5829855 0 2020-05-19 00:56:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331fdb7 0xc00331fdb8}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.223\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 
00:56:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.223,StartTime:2020-05-19 00:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:56:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5ec537eeaa02e2739bbd7ee612bb561b28deac23243c342d5cfe61afcda88791,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.751: INFO: Pod "webserver-deployment-84855cf797-qjvwp" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qjvwp webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-qjvwp 664d3b4b-4cbc-48d1-9c3f-6ab4bbe56a3a 5830034 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc00331ff67 0xc00331ff68}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.752: INFO: Pod "webserver-deployment-84855cf797-r6kz4" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-r6kz4 webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-r6kz4 628a625e-dfc4-4acb-a102-18cc630a66a7 5829807 0 2020-05-19 00:56:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc0032e01e7 0xc0032e01e8}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.220\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 
00:56:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.220,StartTime:2020-05-19 00:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:56:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a88d1ff6834572fa5994f32320b56d4705065ca6071fadc18792e26309b3105,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.752: INFO: Pod "webserver-deployment-84855cf797-rq9mr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rq9mr webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-rq9mr 1f09d35e-2cf5-4033-8f20-66aa0795fe6e 5830016 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc0032e0477 0xc0032e0478}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.752: INFO: Pod "webserver-deployment-84855cf797-rs9kd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rs9kd webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-rs9kd 05268b62-0f94-4343-801e-e4b0ac6ba872 5830011 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc0032e0687 0xc0032e0688}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.752: INFO: Pod "webserver-deployment-84855cf797-snjwr" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-snjwr webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-snjwr 5d5ec31f-4d31-4d0f-98fd-002613424cd6 5829851 0 2020-05-19 00:56:44 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc0032e0867 0xc0032e0868}] [] [{kube-controller-manager Update v1 2020-05-19 00:56:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:56:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.221\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 
00:56:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:56:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.221,StartTime:2020-05-19 00:56:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-19 00:56:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a287b1fc7e6c2e30176db855c6806879d8f4db0d23997bbd1d68a4c3eefcc024,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.752: INFO: Pod "webserver-deployment-84855cf797-wx8qk" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wx8qk webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-wx8qk 55ee4af8-f245-4833-99fe-fb5ae15e3291 5830040 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc0032e0a77 0xc0032e0a78}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 19 00:57:03.753: INFO: Pod "webserver-deployment-84855cf797-xkwv4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xkwv4 webserver-deployment-84855cf797- deployment-884 /api/v1/namespaces/deployment-884/pods/webserver-deployment-84855cf797-xkwv4 9810db77-cdc6-4230-8b91-03f9abcb8b6f 5830017 0 2020-05-19 00:57:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 e31f5a17-902a-43a1-9b15-09338396915a 0xc0032e0c47 0xc0032e0c48}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e31f5a17-902a-43a1-9b15-09338396915a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6mjgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6mjgh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6mjgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:57:03.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-884" for this suite. • [SLOW TEST:20.179 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":236,"skipped":3905,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:57:03.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:57:04.492: INFO: Creating deployment "test-recreate-deployment" May 19 00:57:04.506: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 19 00:57:04.625: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 19 00:57:07.052: INFO: Waiting deployment "test-recreate-deployment" to complete May 19 00:57:07.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446625, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:57:09.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446625, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:57:12.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446625, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:57:13.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446625, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:57:15.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446625, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:57:17.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446625, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:57:19.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446625, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446624, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 00:57:21.445: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 19 00:57:21.671: INFO: Updating deployment test-recreate-deployment May 19 00:57:21.671: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 19 00:57:22.679: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1370 /apis/apps/v1/namespaces/deployment-1370/deployments/test-recreate-deployment 78e36a8c-4cb4-4571-b111-5a03c810dc8d 5830273 2 2020-05-19 00:57:04 +0000 UTC map[name:sample-pod-3] 
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-19 00:57:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-19 00:57:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b04698 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-19 00:57:22 +0000 UTC,LastTransitionTime:2020-05-19 00:57:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-19 00:57:22 +0000 UTC,LastTransitionTime:2020-05-19 00:57:04 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 19 00:57:22.699: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-1370 /apis/apps/v1/namespaces/deployment-1370/replicasets/test-recreate-deployment-d5667d9c7 75972a3d-b4a2-4bc8-9f75-f898f8318023 5830271 1 2020-05-19 00:57:22 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] 
[{apps/v1 Deployment test-recreate-deployment 78e36a8c-4cb4-4571-b111-5a03c810dc8d 0xc002b04ba0 0xc002b04ba1}] [] [{kube-controller-manager Update apps/v1 2020-05-19 00:57:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78e36a8c-4cb4-4571-b111-5a03c810dc8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b04c18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 00:57:22.699: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 19 00:57:22.700: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-1370 /apis/apps/v1/namespaces/deployment-1370/replicasets/test-recreate-deployment-6d65b9f6d8 cb55a084-247f-47ba-84de-6d2b11bb0a86 5830261 2 2020-05-19 00:57:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 78e36a8c-4cb4-4571-b111-5a03c810dc8d 0xc002b04aa7 0xc002b04aa8}] [] [{kube-controller-manager Update apps/v1 2020-05-19 00:57:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"78e36a8c-4cb4-4571-b111-5a03c810dc8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b04b38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 00:57:22.742: INFO: Pod "test-recreate-deployment-d5667d9c7-z7zvc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-z7zvc test-recreate-deployment-d5667d9c7- deployment-1370 /api/v1/namespaces/deployment-1370/pods/test-recreate-deployment-d5667d9c7-z7zvc 73023345-fefd-4d4f-b398-d92f355cd66e 5830276 0 2020-05-19 00:57:22 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 75972a3d-b4a2-4bc8-9f75-f898f8318023 0xc003448770 0xc003448771}] [] [{kube-controller-manager Update v1 2020-05-19 00:57:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75972a3d-b4a2-4bc8-9f75-f898f8318023\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-19 00:57:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-26zkq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-26zkq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-26zkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:22 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-19 00:57:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-19 00:57:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:57:22.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1370" for this suite. • [SLOW TEST:19.182 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":237,"skipped":3911,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:57:23.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 19 00:57:23.366: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 19 00:57:23.911: INFO: Waiting for terminating namespaces to be deleted... 
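A note on the RecreateDeployment spec that passed above, before the scheduler-predicate node inventory continues below: the old ReplicaSet test-recreate-deployment-6d65b9f6d8 shows Replicas:*0 in the dump before the new ReplicaSet's pod exists, which is exactly what a Recreate strategy guarantees. A minimal Go sketch of the Deployment shape that drives this behavior, reusing the name, labels, and image from the dump; the helper itself is illustrative, not the e2e suite's own code:

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// recreateDeployment builds a Deployment whose rollout strategy is Recreate:
// the controller scales the old ReplicaSet to zero before the new one starts
// any pod, so old and new pods never run together.
func recreateDeployment(replicas int32) *appsv1.Deployment {
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}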
May 19 00:57:23.983: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 19 00:57:24.038: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 19 00:57:24.038: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 19 00:57:24.038: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 19 00:57:24.038: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 19 00:57:24.038: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 19 00:57:24.038: INFO: Container kindnet-cni ready: true, restart count 0 May 19 00:57:24.038: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 19 00:57:24.038: INFO: Container kube-proxy ready: true, restart count 0 May 19 00:57:24.038: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 19 00:57:24.048: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 19 00:57:24.048: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 19 00:57:24.048: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 19 00:57:24.048: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 19 00:57:24.048: INFO: test-recreate-deployment-d5667d9c7-z7zvc from deployment-1370 started at 2020-05-19 00:57:22 +0000 UTC (1 container status recorded) May 19 00:57:24.048: INFO: Container httpd ready: false, restart count 0 May 19 00:57:24.048: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 19 00:57:24.048: INFO: Container kindnet-cni ready: true, restart count 0 May 19 00:57:24.048: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 19 00:57:24.048: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.161048028985f9d0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1610480294460d21], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:57:25.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3016" for this suite. 
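The FailedScheduling events above are the expected outcome of this spec: the pod carries a non-empty nodeSelector that matches no node label, so the scheduler reports 0/3 nodes available and the pod stays Pending. A sketch of the pod shape that provokes this; the pod name matches the events above, while the label pair and image are illustrative stand-ins rather than values from this run:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedPod pins scheduling to a label that (by assumption) no node
// carries, so the scheduler emits FailedScheduling events like "node(s)
// didn't match node selector" and the pod never leaves Pending.
func restrictedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // assumed: no node has this label
			Containers: []corev1.Container{{
				Name:  "restricted-pod",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
			}},
		},
	}
}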
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":238,"skipped":3916,"failed":0} SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:57:25.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-78909049-250a-4e75-be9a-306d123ab26f in namespace container-probe-4468 May 19 00:57:34.439: INFO: Started pod busybox-78909049-250a-4e75-be9a-306d123ab26f in namespace container-probe-4468 STEP: checking the pod's current state and verifying that restartCount is present May 19 00:57:34.442: INFO: Initial restart count of pod busybox-78909049-250a-4e75-be9a-306d123ab26f is 0 May 19 00:58:26.779: INFO: Restart count of pod container-probe-4468/busybox-78909049-250a-4e75-be9a-306d123ab26f is now 1 (52.33660881s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:58:26.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4468" for this suite. 
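The restart counted above (0 to 1 after roughly 52s) is the kubelet reacting to the exec liveness probe: once cat /tmp/health starts exiting non-zero, the container is killed and restarted. A sketch of the container and probe shape, assuming the busybox container deletes its own health file after a delay; the exact command, image tag, and timings are illustrative, and Handler is the field name in the v1.18-era API this run uses:

package example

import corev1 "k8s.io/api/core/v1"

// livenessContainer creates /tmp/health, removes it after 30s, then idles.
// The exec probe succeeds while the file exists and fails afterwards, so the
// kubelet restarts the container and the pod's restart count increments.
func livenessContainer() corev1.Container {
	return corev1.Container{
		Name:    "busybox",
		Image:   "docker.io/library/busybox:1.29", // illustrative tag
		Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
}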
• [SLOW TEST:61.556 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":3919,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:58:26.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-3813c165-1b70-4a60-bff6-5ea6904cc3bd STEP: Creating a pod to test consume secrets May 19 00:58:26.907: INFO: Waiting up to 5m0s for pod "pod-secrets-444136f2-30cc-4628-9087-9c94e95a1dc9" in namespace "secrets-5455" to be "Succeeded or Failed" May 19 00:58:26.923: INFO: Pod "pod-secrets-444136f2-30cc-4628-9087-9c94e95a1dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.654238ms May 19 00:58:28.927: INFO: Pod "pod-secrets-444136f2-30cc-4628-9087-9c94e95a1dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020085932s May 19 00:58:30.931: INFO: Pod "pod-secrets-444136f2-30cc-4628-9087-9c94e95a1dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024207189s STEP: Saw pod success May 19 00:58:30.931: INFO: Pod "pod-secrets-444136f2-30cc-4628-9087-9c94e95a1dc9" satisfied condition "Succeeded or Failed" May 19 00:58:30.934: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-444136f2-30cc-4628-9087-9c94e95a1dc9 container secret-volume-test: STEP: delete the pod May 19 00:58:31.144: INFO: Waiting for pod pod-secrets-444136f2-30cc-4628-9087-9c94e95a1dc9 to disappear May 19 00:58:31.209: INFO: Pod pod-secrets-444136f2-30cc-4628-9087-9c94e95a1dc9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:58:31.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5455" for this suite. 
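"With mappings" in the spec above refers to the Items list on the secret volume source: instead of projecting every secret key under its own name, a key is mapped to an explicit relative path inside the mount. A sketch of that volume shape, reusing the secret name created above; the key and target path are assumed for illustration:

package example

import corev1 "k8s.io/api/core/v1"

// mappedSecretVolume projects a single secret key to a chosen path, so the
// test container reads it at <mountPath>/new-path-data-1 rather than under
// the key's own name.
func mappedSecretVolume() corev1.Volume {
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map-3813c165-1b70-4a60-bff6-5ea6904cc3bd",
				Items: []corev1.KeyToPath{
					{Key: "data-1", Path: "new-path-data-1"}, // assumed key/path pair
				},
			},
		},
	}
}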
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":3920,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:58:31.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9046 STEP: creating service affinity-nodeport-transition in namespace services-9046 STEP: creating replication controller affinity-nodeport-transition in namespace services-9046 I0519 00:58:31.395721 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9046, replica count: 3 I0519 00:58:34.446120 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:58:37.446356 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 00:58:37.457: INFO: Creating new exec pod May 19 00:58:42.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9046 execpod-affinitycph82 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 19 00:58:42.711: INFO: stderr: "I0519 00:58:42.622903 3692 log.go:172] (0xc000935340) (0xc000aa66e0) Create stream\nI0519 00:58:42.622960 3692 log.go:172] (0xc000935340) (0xc000aa66e0) Stream added, broadcasting: 1\nI0519 00:58:42.627799 3692 log.go:172] (0xc000935340) Reply frame received for 1\nI0519 00:58:42.627843 3692 log.go:172] (0xc000935340) (0xc000528320) Create stream\nI0519 00:58:42.627854 3692 log.go:172] (0xc000935340) (0xc000528320) Stream added, broadcasting: 3\nI0519 00:58:42.628605 3692 log.go:172] (0xc000935340) Reply frame received for 3\nI0519 00:58:42.628632 3692 log.go:172] (0xc000935340) (0xc00045ae60) Create stream\nI0519 00:58:42.628643 3692 log.go:172] (0xc000935340) (0xc00045ae60) Stream added, broadcasting: 5\nI0519 00:58:42.629538 3692 log.go:172] (0xc000935340) Reply frame received for 5\nI0519 00:58:42.703170 3692 log.go:172] (0xc000935340) Data frame received for 5\nI0519 00:58:42.703195 3692 log.go:172] (0xc00045ae60) (5) Data frame handling\nI0519 00:58:42.703229 3692 log.go:172] (0xc00045ae60) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0519 00:58:42.703964 3692 log.go:172] (0xc000935340) Data frame received for 5\nI0519 00:58:42.703986 3692 log.go:172] (0xc00045ae60) (5) Data frame handling\nI0519 00:58:42.704003 3692 log.go:172] (0xc00045ae60) (5) Data frame sent\nConnection to 
affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0519 00:58:42.704259 3692 log.go:172] (0xc000935340) Data frame received for 3\nI0519 00:58:42.704280 3692 log.go:172] (0xc000528320) (3) Data frame handling\nI0519 00:58:42.704301 3692 log.go:172] (0xc000935340) Data frame received for 5\nI0519 00:58:42.704321 3692 log.go:172] (0xc00045ae60) (5) Data frame handling\nI0519 00:58:42.706207 3692 log.go:172] (0xc000935340) Data frame received for 1\nI0519 00:58:42.706232 3692 log.go:172] (0xc000aa66e0) (1) Data frame handling\nI0519 00:58:42.706257 3692 log.go:172] (0xc000aa66e0) (1) Data frame sent\nI0519 00:58:42.706273 3692 log.go:172] (0xc000935340) (0xc000aa66e0) Stream removed, broadcasting: 1\nI0519 00:58:42.706370 3692 log.go:172] (0xc000935340) Go away received\nI0519 00:58:42.706764 3692 log.go:172] (0xc000935340) (0xc000aa66e0) Stream removed, broadcasting: 1\nI0519 00:58:42.706787 3692 log.go:172] (0xc000935340) (0xc000528320) Stream removed, broadcasting: 3\nI0519 00:58:42.706800 3692 log.go:172] (0xc000935340) (0xc00045ae60) Stream removed, broadcasting: 5\n" May 19 00:58:42.711: INFO: stdout: "" May 19 00:58:42.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9046 execpod-affinitycph82 -- /bin/sh -x -c nc -zv -t -w 2 10.110.242.118 80' May 19 00:58:42.921: INFO: stderr: "I0519 00:58:42.849855 3712 log.go:172] (0xc000be9080) (0xc000823d60) Create stream\nI0519 00:58:42.849912 3712 log.go:172] (0xc000be9080) (0xc000823d60) Stream added, broadcasting: 1\nI0519 00:58:42.854645 3712 log.go:172] (0xc000be9080) Reply frame received for 1\nI0519 00:58:42.854684 3712 log.go:172] (0xc000be9080) (0xc000812fa0) Create stream\nI0519 00:58:42.854695 3712 log.go:172] (0xc000be9080) (0xc000812fa0) Stream added, broadcasting: 3\nI0519 00:58:42.855547 3712 log.go:172] (0xc000be9080) Reply frame received for 3\nI0519 00:58:42.855576 3712 log.go:172] (0xc000be9080) (0xc0007986e0) Create stream\nI0519 00:58:42.855585 3712 log.go:172] (0xc000be9080) (0xc0007986e0) Stream added, broadcasting: 5\nI0519 00:58:42.856354 3712 log.go:172] (0xc000be9080) Reply frame received for 5\nI0519 00:58:42.912543 3712 log.go:172] (0xc000be9080) Data frame received for 5\nI0519 00:58:42.912581 3712 log.go:172] (0xc000be9080) Data frame received for 3\nI0519 00:58:42.912613 3712 log.go:172] (0xc000812fa0) (3) Data frame handling\nI0519 00:58:42.912640 3712 log.go:172] (0xc0007986e0) (5) Data frame handling\nI0519 00:58:42.912665 3712 log.go:172] (0xc0007986e0) (5) Data frame sent\nI0519 00:58:42.912686 3712 log.go:172] (0xc000be9080) Data frame received for 5\nI0519 00:58:42.912705 3712 log.go:172] (0xc0007986e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.242.118 80\nConnection to 10.110.242.118 80 port [tcp/http] succeeded!\nI0519 00:58:42.914538 3712 log.go:172] (0xc000be9080) Data frame received for 1\nI0519 00:58:42.914561 3712 log.go:172] (0xc000823d60) (1) Data frame handling\nI0519 00:58:42.914593 3712 log.go:172] (0xc000823d60) (1) Data frame sent\nI0519 00:58:42.914611 3712 log.go:172] (0xc000be9080) (0xc000823d60) Stream removed, broadcasting: 1\nI0519 00:58:42.914625 3712 log.go:172] (0xc000be9080) Go away received\nI0519 00:58:42.915126 3712 log.go:172] (0xc000be9080) (0xc000823d60) Stream removed, broadcasting: 1\nI0519 00:58:42.915163 3712 log.go:172] (0xc000be9080) (0xc000812fa0) Stream removed, broadcasting: 3\nI0519 00:58:42.915206 3712 log.go:172] (0xc000be9080) (0xc0007986e0) Stream removed, 
broadcasting: 5\n" May 19 00:58:42.921: INFO: stdout: "" May 19 00:58:42.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9046 execpod-affinitycph82 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30145' May 19 00:58:43.165: INFO: stderr: "I0519 00:58:43.086441 3734 log.go:172] (0xc000a1d080) (0xc0009ac0a0) Create stream\nI0519 00:58:43.086498 3734 log.go:172] (0xc000a1d080) (0xc0009ac0a0) Stream added, broadcasting: 1\nI0519 00:58:43.092211 3734 log.go:172] (0xc000a1d080) Reply frame received for 1\nI0519 00:58:43.092283 3734 log.go:172] (0xc000a1d080) (0xc00086a000) Create stream\nI0519 00:58:43.092305 3734 log.go:172] (0xc000a1d080) (0xc00086a000) Stream added, broadcasting: 3\nI0519 00:58:43.093756 3734 log.go:172] (0xc000a1d080) Reply frame received for 3\nI0519 00:58:43.093804 3734 log.go:172] (0xc000a1d080) (0xc00082e640) Create stream\nI0519 00:58:43.093822 3734 log.go:172] (0xc000a1d080) (0xc00082e640) Stream added, broadcasting: 5\nI0519 00:58:43.095106 3734 log.go:172] (0xc000a1d080) Reply frame received for 5\nI0519 00:58:43.156055 3734 log.go:172] (0xc000a1d080) Data frame received for 5\nI0519 00:58:43.156111 3734 log.go:172] (0xc00082e640) (5) Data frame handling\nI0519 00:58:43.156132 3734 log.go:172] (0xc00082e640) (5) Data frame sent\nI0519 00:58:43.156145 3734 log.go:172] (0xc000a1d080) Data frame received for 5\nI0519 00:58:43.156158 3734 log.go:172] (0xc00082e640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30145\nConnection to 172.17.0.13 30145 port [tcp/30145] succeeded!\nI0519 00:58:43.156211 3734 log.go:172] (0xc000a1d080) Data frame received for 3\nI0519 00:58:43.156247 3734 log.go:172] (0xc00086a000) (3) Data frame handling\nI0519 00:58:43.157848 3734 log.go:172] (0xc000a1d080) Data frame received for 1\nI0519 00:58:43.157886 3734 log.go:172] (0xc0009ac0a0) (1) Data frame handling\nI0519 00:58:43.157907 3734 log.go:172] (0xc0009ac0a0) (1) Data frame sent\nI0519 00:58:43.157929 3734 log.go:172] (0xc000a1d080) (0xc0009ac0a0) Stream removed, broadcasting: 1\nI0519 00:58:43.157965 3734 log.go:172] (0xc000a1d080) Go away received\nI0519 00:58:43.158411 3734 log.go:172] (0xc000a1d080) (0xc0009ac0a0) Stream removed, broadcasting: 1\nI0519 00:58:43.158444 3734 log.go:172] (0xc000a1d080) (0xc00086a000) Stream removed, broadcasting: 3\nI0519 00:58:43.158459 3734 log.go:172] (0xc000a1d080) (0xc00082e640) Stream removed, broadcasting: 5\n" May 19 00:58:43.165: INFO: stdout: "" May 19 00:58:43.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9046 execpod-affinitycph82 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30145' May 19 00:58:43.382: INFO: stderr: "I0519 00:58:43.308783 3754 log.go:172] (0xc000aa0d10) (0xc0002fd900) Create stream\nI0519 00:58:43.308839 3754 log.go:172] (0xc000aa0d10) (0xc0002fd900) Stream added, broadcasting: 1\nI0519 00:58:43.311838 3754 log.go:172] (0xc000aa0d10) Reply frame received for 1\nI0519 00:58:43.311907 3754 log.go:172] (0xc000aa0d10) (0xc0004d4780) Create stream\nI0519 00:58:43.311929 3754 log.go:172] (0xc000aa0d10) (0xc0004d4780) Stream added, broadcasting: 3\nI0519 00:58:43.312985 3754 log.go:172] (0xc000aa0d10) Reply frame received for 3\nI0519 00:58:43.313031 3754 log.go:172] (0xc000aa0d10) (0xc0002fdea0) Create stream\nI0519 00:58:43.313048 3754 log.go:172] (0xc000aa0d10) (0xc0002fdea0) Stream added, broadcasting: 5\nI0519 00:58:43.314399 3754 log.go:172] 
(0xc000aa0d10) Reply frame received for 5\nI0519 00:58:43.375194 3754 log.go:172] (0xc000aa0d10) Data frame received for 3\nI0519 00:58:43.375249 3754 log.go:172] (0xc0004d4780) (3) Data frame handling\nI0519 00:58:43.375276 3754 log.go:172] (0xc000aa0d10) Data frame received for 5\nI0519 00:58:43.375306 3754 log.go:172] (0xc0002fdea0) (5) Data frame handling\nI0519 00:58:43.375331 3754 log.go:172] (0xc0002fdea0) (5) Data frame sent\nI0519 00:58:43.375341 3754 log.go:172] (0xc000aa0d10) Data frame received for 5\nI0519 00:58:43.375349 3754 log.go:172] (0xc0002fdea0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30145\nConnection to 172.17.0.12 30145 port [tcp/30145] succeeded!\nI0519 00:58:43.376585 3754 log.go:172] (0xc000aa0d10) Data frame received for 1\nI0519 00:58:43.376619 3754 log.go:172] (0xc0002fd900) (1) Data frame handling\nI0519 00:58:43.376646 3754 log.go:172] (0xc0002fd900) (1) Data frame sent\nI0519 00:58:43.376669 3754 log.go:172] (0xc000aa0d10) (0xc0002fd900) Stream removed, broadcasting: 1\nI0519 00:58:43.376759 3754 log.go:172] (0xc000aa0d10) Go away received\nI0519 00:58:43.377023 3754 log.go:172] (0xc000aa0d10) (0xc0002fd900) Stream removed, broadcasting: 1\nI0519 00:58:43.377053 3754 log.go:172] (0xc000aa0d10) (0xc0004d4780) Stream removed, broadcasting: 3\nI0519 00:58:43.377062 3754 log.go:172] (0xc000aa0d10) (0xc0002fdea0) Stream removed, broadcasting: 5\n" May 19 00:58:43.382: INFO: stdout: "" May 19 00:58:43.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9046 execpod-affinitycph82 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30145/ ; done' May 19 00:58:43.723: INFO: stderr: "I0519 00:58:43.541292 3774 log.go:172] (0xc000ab5c30) (0xc000abe280) Create stream\nI0519 00:58:43.541377 3774 log.go:172] (0xc000ab5c30) (0xc000abe280) Stream added, broadcasting: 1\nI0519 00:58:43.545819 3774 log.go:172] (0xc000ab5c30) Reply frame received for 1\nI0519 00:58:43.545863 3774 log.go:172] (0xc000ab5c30) (0xc00051fd60) Create stream\nI0519 00:58:43.545885 3774 log.go:172] (0xc000ab5c30) (0xc00051fd60) Stream added, broadcasting: 3\nI0519 00:58:43.546791 3774 log.go:172] (0xc000ab5c30) Reply frame received for 3\nI0519 00:58:43.546832 3774 log.go:172] (0xc000ab5c30) (0xc00023bae0) Create stream\nI0519 00:58:43.546866 3774 log.go:172] (0xc000ab5c30) (0xc00023bae0) Stream added, broadcasting: 5\nI0519 00:58:43.547795 3774 log.go:172] (0xc000ab5c30) Reply frame received for 5\nI0519 00:58:43.618334 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.618368 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.618376 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.618405 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.618428 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.618448 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.624069 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.624095 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.624138 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.625031 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.625047 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.625061 3774 
log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.625089 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.625102 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.625261 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.630422 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.630464 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.630502 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.630841 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.630867 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.630877 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.630898 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.630916 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.630927 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.638691 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.638712 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.638741 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.639329 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.639355 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.639374 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.639411 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.639431 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.639457 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\nI0519 00:58:43.644269 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.644288 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.644301 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.645037 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.645052 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.645072 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.645084 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.645096 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.645103 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.651558 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.651579 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.651605 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.652038 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.652057 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.652070 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.652085 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.652095 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.652105 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.658988 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.659013 3774 
log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.659038 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.659456 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.659481 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.659499 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.659518 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.659558 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\nI0519 00:58:43.659585 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\n+ echo\nI0519 00:58:43.659621 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.659646 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.659666 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.664088 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.664178 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.664216 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.664622 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.664644 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.664655 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.664671 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.664681 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.664691 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.671107 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.671125 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.671141 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.671773 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.671819 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.671859 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.671888 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.671908 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.671956 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\nI0519 00:58:43.671974 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.671984 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.672007 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\nI0519 00:58:43.678889 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.678907 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.678921 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.679418 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.679445 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.679456 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.679487 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.679508 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.679529 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.683909 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 
00:58:43.683980 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.684012 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.684444 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.684473 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.684494 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.684515 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.684531 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.684544 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.690275 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.690313 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.690343 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.693891 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.693996 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.694024 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.694046 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.694055 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.694065 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.695301 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.695324 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.695341 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.695777 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.695803 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.695819 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\nI0519 00:58:43.695838 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.695853 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.695880 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\nI0519 00:58:43.695944 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.695962 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.696006 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.700698 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.700727 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.700760 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.701551 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.701568 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.701592 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.701600 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.701609 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.701627 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.706659 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.706671 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.706684 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.707192 3774 log.go:172] (0xc000ab5c30) Data frame received for 
3\nI0519 00:58:43.707223 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.707240 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.707261 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.707276 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.707296 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\nI0519 00:58:43.707308 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.707317 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.707356 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\nI0519 00:58:43.711492 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.711509 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.711523 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.711884 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.711898 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.711906 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.711916 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.711922 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.711935 3774 log.go:172] (0xc00023bae0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.715837 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.715853 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.715869 3774 log.go:172] (0xc00051fd60) (3) Data frame sent\nI0519 00:58:43.716446 3774 log.go:172] (0xc000ab5c30) Data frame received for 5\nI0519 00:58:43.716464 3774 log.go:172] (0xc00023bae0) (5) Data frame handling\nI0519 00:58:43.716636 3774 log.go:172] (0xc000ab5c30) Data frame received for 3\nI0519 00:58:43.716658 3774 log.go:172] (0xc00051fd60) (3) Data frame handling\nI0519 00:58:43.718571 3774 log.go:172] (0xc000ab5c30) Data frame received for 1\nI0519 00:58:43.718589 3774 log.go:172] (0xc000abe280) (1) Data frame handling\nI0519 00:58:43.718597 3774 log.go:172] (0xc000abe280) (1) Data frame sent\nI0519 00:58:43.718617 3774 log.go:172] (0xc000ab5c30) (0xc000abe280) Stream removed, broadcasting: 1\nI0519 00:58:43.718698 3774 log.go:172] (0xc000ab5c30) Go away received\nI0519 00:58:43.719026 3774 log.go:172] (0xc000ab5c30) (0xc000abe280) Stream removed, broadcasting: 1\nI0519 00:58:43.719040 3774 log.go:172] (0xc000ab5c30) (0xc00051fd60) Stream removed, broadcasting: 3\nI0519 00:58:43.719047 3774 log.go:172] (0xc000ab5c30) (0xc00023bae0) Stream removed, broadcasting: 5\n" May 19 00:58:43.723: INFO: stdout: "\naffinity-nodeport-transition-t8kq9\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-gj79q\naffinity-nodeport-transition-t8kq9\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-gj79q\naffinity-nodeport-transition-gj79q\naffinity-nodeport-transition-gj79q\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-t8kq9\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-t8kq9" May 19 00:58:43.723: INFO: Received response from host: May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-t8kq9 May 19 00:58:43.723: INFO: Received response from 
host: affinity-nodeport-transition-thdmd May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-gj79q May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-t8kq9 May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-gj79q May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-gj79q May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-gj79q May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-t8kq9 May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 00:58:43.723: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 00:58:43.724: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 00:58:43.724: INFO: Received response from host: affinity-nodeport-transition-t8kq9 May 19 00:58:43.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9046 execpod-affinitycph82 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30145/ ; done' May 19 00:58:44.017: INFO: stderr: "I0519 00:58:43.864448 3798 log.go:172] (0xc0009934a0) (0xc00081f540) Create stream\nI0519 00:58:43.864493 3798 log.go:172] (0xc0009934a0) (0xc00081f540) Stream added, broadcasting: 1\nI0519 00:58:43.867557 3798 log.go:172] (0xc0009934a0) Reply frame received for 1\nI0519 00:58:43.867584 3798 log.go:172] (0xc0009934a0) (0xc0006b25a0) Create stream\nI0519 00:58:43.867593 3798 log.go:172] (0xc0009934a0) (0xc0006b25a0) Stream added, broadcasting: 3\nI0519 00:58:43.868235 3798 log.go:172] (0xc0009934a0) Reply frame received for 3\nI0519 00:58:43.868267 3798 log.go:172] (0xc0009934a0) (0xc0005f4280) Create stream\nI0519 00:58:43.868281 3798 log.go:172] (0xc0009934a0) (0xc0005f4280) Stream added, broadcasting: 5\nI0519 00:58:43.869040 3798 log.go:172] (0xc0009934a0) Reply frame received for 5\nI0519 00:58:43.928442 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.928475 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.928508 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.928522 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.928542 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.928555 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.931612 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.931630 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.931647 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.932375 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.932402 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.932424 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.932441 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 
00:58:43.932455 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.932469 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.938010 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.938041 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.938072 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.938601 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.938650 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.938682 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.938712 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.938734 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.938772 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\nI0519 00:58:43.938792 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.938812 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.938845 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\nI0519 00:58:43.942054 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.942076 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.942088 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.942558 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.942594 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.942617 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.942644 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.942681 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.942707 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\nI0519 00:58:43.942722 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.942731 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.942753 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\nI0519 00:58:43.947791 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.947819 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.947840 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.948418 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.948443 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.948456 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.948472 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.948481 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.948491 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.952710 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.952739 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.952761 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.953592 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.953616 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.953637 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.953661 3798 log.go:172] (0xc0009934a0) Data frame received for 
5\nI0519 00:58:43.953677 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.953693 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.957864 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.957893 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.957920 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.958197 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.958219 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.958238 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.958250 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.958262 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.958275 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.964174 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.964206 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.964219 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.964889 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.964910 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.964926 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.965089 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.965277 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.965312 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.968159 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.968188 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.968209 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.968563 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.968584 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.968592 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.968611 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.968624 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.968639 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.972310 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.972321 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.972327 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.972692 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.972710 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.972731 3798 log.go:172] (0xc0009934a0) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.972765 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.972793 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.972813 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\nI0519 00:58:43.979252 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.979266 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.979274 3798 log.go:172] (0xc0006b25a0) (3) Data frame 
sent\nI0519 00:58:43.979778 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.979802 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.979830 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.979854 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.979869 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.979890 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.983783 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.983808 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.983826 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.984149 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.984185 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.984211 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.984247 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.984281 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.984323 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.990189 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.990207 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.990239 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.991027 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.991043 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.991064 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.991072 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:43.991082 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.991091 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.995545 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.995564 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.995574 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.996117 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:43.996152 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:43.996181 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:43.996216 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:43.996232 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.996253 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\nI0519 00:58:43.996267 3798 log.go:172] (0xc0009934a0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeoutI0519 00:58:43.996278 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:43.996293 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n 2 http://172.17.0.13:30145/\nI0519 00:58:44.000821 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:44.000841 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:44.000857 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:44.001469 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:44.001485 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:44.001494 3798 log.go:172] (0xc0006b25a0) (3) 
Data frame sent\nI0519 00:58:44.001506 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:44.001513 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:44.001522 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\nI0519 00:58:44.001532 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:44.001539 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:44.001554 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\nI0519 00:58:44.005451 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:44.005471 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:44.005487 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:44.005977 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:44.005995 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:44.006011 3798 log.go:172] (0xc0005f4280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30145/\nI0519 00:58:44.006021 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:44.006028 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:44.006037 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:44.010568 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:44.010583 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:44.010593 3798 log.go:172] (0xc0006b25a0) (3) Data frame sent\nI0519 00:58:44.011024 3798 log.go:172] (0xc0009934a0) Data frame received for 3\nI0519 00:58:44.011058 3798 log.go:172] (0xc0006b25a0) (3) Data frame handling\nI0519 00:58:44.011135 3798 log.go:172] (0xc0009934a0) Data frame received for 5\nI0519 00:58:44.011148 3798 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0519 00:58:44.013502 3798 log.go:172] (0xc0009934a0) Data frame received for 1\nI0519 00:58:44.013518 3798 log.go:172] (0xc00081f540) (1) Data frame handling\nI0519 00:58:44.013525 3798 log.go:172] (0xc00081f540) (1) Data frame sent\nI0519 00:58:44.013542 3798 log.go:172] (0xc0009934a0) (0xc00081f540) Stream removed, broadcasting: 1\nI0519 00:58:44.013577 3798 log.go:172] (0xc0009934a0) Go away received\nI0519 00:58:44.013844 3798 log.go:172] (0xc0009934a0) (0xc00081f540) Stream removed, broadcasting: 1\nI0519 00:58:44.013865 3798 log.go:172] (0xc0009934a0) (0xc0006b25a0) Stream removed, broadcasting: 3\nI0519 00:58:44.013878 3798 log.go:172] (0xc0009934a0) (0xc0005f4280) Stream removed, broadcasting: 5\n" May 19 00:58:44.018: INFO: stdout: "\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd\naffinity-nodeport-transition-thdmd" May 19 00:58:44.018: INFO: Received response from host: May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd May 19 
00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Received response from host: affinity-nodeport-transition-thdmd
May 19 00:58:44.018: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9046, will wait for the garbage collector to delete the pods
May 19 00:58:44.544: INFO: Deleting ReplicationController affinity-nodeport-transition took: 355.12082ms
May 19 00:58:45.044: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.230397ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 19 00:58:50.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9046" for this suite.
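------------------------------
The two curl loops above are the actual affinity check: with session affinity off, 16 requests spread across all three backends (t8kq9, thdmd, gj79q); after the test switches the service to ClientIP affinity, every request lands on the same pod (thdmd). A minimal client-go sketch of that switch, assuming the kubeconfig path and the service/namespace names from this run, with error handling trimmed to panics for brevity:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	svcs := client.CoreV1().Services("services-9046") // namespace from the run above
    	svc, err := svcs.Get(context.TODO(), "affinity-nodeport-transition", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	// Pin each client IP to one backend; kube-proxy then sends every request
    	// from the exec pod to a single endpoint, as in the second curl loop.
    	svc.Spec.SessionAffinity = corev1.ServiceAffinityClientIP
    	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	// Setting corev1.ServiceAffinityNone afterwards restores the spreading
    	// seen in the first loop, which is the "switch" this test exercises.
    }
------------------------------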
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:19.041 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":241,"skipped":3937,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 19 00:58:50.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 19 00:58:50.756: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 19 00:58:52.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446730, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446730, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446730, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725446730, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 19 00:58:55.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap
that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:58:55.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-162" for this suite. STEP: Destroying namespace "webhook-162-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.891 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":242,"skipped":3941,"failed":0} [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:58:56.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2169 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-2169 I0519 00:58:56.988204 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2169, replica count: 2 I0519 00:59:00.038601 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 00:59:03.038838 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 00:59:03.038: INFO: Creating new exec pod May 19 00:59:08.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2169 execpod9dhv7 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 19 00:59:08.306: INFO: stderr: "I0519 00:59:08.215896 3819 log.go:172] (0xc0009fcfd0) (0xc000693f40) Create stream\nI0519 00:59:08.215955 3819 log.go:172] (0xc0009fcfd0) (0xc000693f40) Stream added, broadcasting: 1\nI0519 00:59:08.218849 3819 log.go:172] (0xc0009fcfd0) Reply frame received for 1\nI0519 00:59:08.218926 3819 log.go:172] (0xc0009fcfd0) (0xc000732f00) Create stream\nI0519 00:59:08.218952 3819 log.go:172] (0xc0009fcfd0) (0xc000732f00) Stream 
added, broadcasting: 3\nI0519 00:59:08.220248 3819 log.go:172] (0xc0009fcfd0) Reply frame received for 3\nI0519 00:59:08.220288 3819 log.go:172] (0xc0009fcfd0) (0xc000b9c0a0) Create stream\nI0519 00:59:08.220301 3819 log.go:172] (0xc0009fcfd0) (0xc000b9c0a0) Stream added, broadcasting: 5\nI0519 00:59:08.221586 3819 log.go:172] (0xc0009fcfd0) Reply frame received for 5\nI0519 00:59:08.277349 3819 log.go:172] (0xc0009fcfd0) Data frame received for 5\nI0519 00:59:08.277386 3819 log.go:172] (0xc000b9c0a0) (5) Data frame handling\nI0519 00:59:08.277403 3819 log.go:172] (0xc000b9c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0519 00:59:08.298078 3819 log.go:172] (0xc0009fcfd0) Data frame received for 5\nI0519 00:59:08.298125 3819 log.go:172] (0xc000b9c0a0) (5) Data frame handling\nI0519 00:59:08.298149 3819 log.go:172] (0xc000b9c0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0519 00:59:08.298453 3819 log.go:172] (0xc0009fcfd0) Data frame received for 5\nI0519 00:59:08.298494 3819 log.go:172] (0xc000b9c0a0) (5) Data frame handling\nI0519 00:59:08.298565 3819 log.go:172] (0xc0009fcfd0) Data frame received for 3\nI0519 00:59:08.298595 3819 log.go:172] (0xc000732f00) (3) Data frame handling\nI0519 00:59:08.300560 3819 log.go:172] (0xc0009fcfd0) Data frame received for 1\nI0519 00:59:08.300581 3819 log.go:172] (0xc000693f40) (1) Data frame handling\nI0519 00:59:08.300594 3819 log.go:172] (0xc000693f40) (1) Data frame sent\nI0519 00:59:08.300605 3819 log.go:172] (0xc0009fcfd0) (0xc000693f40) Stream removed, broadcasting: 1\nI0519 00:59:08.300673 3819 log.go:172] (0xc0009fcfd0) Go away received\nI0519 00:59:08.301029 3819 log.go:172] (0xc0009fcfd0) (0xc000693f40) Stream removed, broadcasting: 1\nI0519 00:59:08.301056 3819 log.go:172] (0xc0009fcfd0) (0xc000732f00) Stream removed, broadcasting: 3\nI0519 00:59:08.301073 3819 log.go:172] (0xc0009fcfd0) (0xc000b9c0a0) Stream removed, broadcasting: 5\n" May 19 00:59:08.306: INFO: stdout: "" May 19 00:59:08.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2169 execpod9dhv7 -- /bin/sh -x -c nc -zv -t -w 2 10.104.74.251 80' May 19 00:59:08.484: INFO: stderr: "I0519 00:59:08.425315 3839 log.go:172] (0xc000510d10) (0xc000a88500) Create stream\nI0519 00:59:08.425375 3839 log.go:172] (0xc000510d10) (0xc000a88500) Stream added, broadcasting: 1\nI0519 00:59:08.430044 3839 log.go:172] (0xc000510d10) Reply frame received for 1\nI0519 00:59:08.430069 3839 log.go:172] (0xc000510d10) (0xc000544460) Create stream\nI0519 00:59:08.430075 3839 log.go:172] (0xc000510d10) (0xc000544460) Stream added, broadcasting: 3\nI0519 00:59:08.431232 3839 log.go:172] (0xc000510d10) Reply frame received for 3\nI0519 00:59:08.431277 3839 log.go:172] (0xc000510d10) (0xc000514000) Create stream\nI0519 00:59:08.431300 3839 log.go:172] (0xc000510d10) (0xc000514000) Stream added, broadcasting: 5\nI0519 00:59:08.432285 3839 log.go:172] (0xc000510d10) Reply frame received for 5\nI0519 00:59:08.477584 3839 log.go:172] (0xc000510d10) Data frame received for 3\nI0519 00:59:08.477631 3839 log.go:172] (0xc000544460) (3) Data frame handling\nI0519 00:59:08.477658 3839 log.go:172] (0xc000510d10) Data frame received for 5\nI0519 00:59:08.477674 3839 log.go:172] (0xc000514000) (5) Data frame handling\nI0519 00:59:08.477696 3839 log.go:172] (0xc000514000) (5) Data frame sent\nI0519 00:59:08.477725 3839 log.go:172] (0xc000510d10) Data frame received for 
5\nI0519 00:59:08.477743 3839 log.go:172] (0xc000514000) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.74.251 80\nConnection to 10.104.74.251 80 port [tcp/http] succeeded!\nI0519 00:59:08.479066 3839 log.go:172] (0xc000510d10) Data frame received for 1\nI0519 00:59:08.479082 3839 log.go:172] (0xc000a88500) (1) Data frame handling\nI0519 00:59:08.479091 3839 log.go:172] (0xc000a88500) (1) Data frame sent\nI0519 00:59:08.479104 3839 log.go:172] (0xc000510d10) (0xc000a88500) Stream removed, broadcasting: 1\nI0519 00:59:08.479201 3839 log.go:172] (0xc000510d10) Go away received\nI0519 00:59:08.479437 3839 log.go:172] (0xc000510d10) (0xc000a88500) Stream removed, broadcasting: 1\nI0519 00:59:08.479454 3839 log.go:172] (0xc000510d10) (0xc000544460) Stream removed, broadcasting: 3\nI0519 00:59:08.479467 3839 log.go:172] (0xc000510d10) (0xc000514000) Stream removed, broadcasting: 5\n" May 19 00:59:08.484: INFO: stdout: "" May 19 00:59:08.484: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:59:08.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2169" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.423 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":243,"skipped":3941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:59:08.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 00:59:08.725: INFO: Create a RollingUpdate DaemonSet May 19 00:59:08.730: INFO: Check that daemon pods launch on every node of the cluster May 19 00:59:08.739: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:59:08.766: INFO: Number of nodes with available pods: 0 May 19 00:59:08.766: INFO: Node latest-worker is running more than one daemon pod May 19 00:59:09.771: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:59:09.775: INFO: Number of nodes with available pods: 0 May 19 00:59:09.775: INFO: Node latest-worker is running more than one daemon pod May 19 00:59:10.771: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:59:10.774: INFO: Number of nodes with available pods: 0 May 19 00:59:10.774: INFO: Node latest-worker is running more than one daemon pod May 19 00:59:11.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:59:11.934: INFO: Number of nodes with available pods: 0 May 19 00:59:11.934: INFO: Node latest-worker is running more than one daemon pod May 19 00:59:12.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:59:12.776: INFO: Number of nodes with available pods: 0 May 19 00:59:12.776: INFO: Node latest-worker is running more than one daemon pod May 19 00:59:13.772: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:59:14.069: INFO: Number of nodes with available pods: 2 May 19 00:59:14.069: INFO: Number of running nodes: 2, number of available pods: 2 May 19 00:59:14.069: INFO: Update the DaemonSet to trigger a rollout May 19 00:59:14.076: INFO: Updating DaemonSet daemon-set May 19 00:59:26.183: INFO: Roll back the DaemonSet before rollout is complete May 19 00:59:26.189: INFO: Updating DaemonSet daemon-set May 19 00:59:26.190: INFO: Make sure DaemonSet rollback is complete May 19 00:59:26.225: INFO: Wrong image for pod: daemon-set-hmp7t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 19 00:59:26.225: INFO: Pod daemon-set-hmp7t is not available May 19 00:59:26.308: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:59:27.314: INFO: Wrong image for pod: daemon-set-hmp7t. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 19 00:59:27.314: INFO: Pod daemon-set-hmp7t is not available May 19 00:59:27.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 19 00:59:28.314: INFO: Pod daemon-set-f4slx is not available May 19 00:59:28.319: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1036, will wait for the garbage collector to delete the pods May 19 00:59:28.384: INFO: Deleting DaemonSet.extensions daemon-set took: 6.277836ms May 19 00:59:28.484: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.202599ms May 19 00:59:32.188: INFO: Number of nodes with available pods: 0 May 19 00:59:32.188: INFO: Number of running nodes: 0, number of available pods: 0 May 19 00:59:32.191: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1036/daemonsets","resourceVersion":"5831180"},"items":null} May 19 00:59:32.194: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1036/pods","resourceVersion":"5831180"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:59:32.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1036" for this suite. 
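For reference, the rollback flow the spec above just exercised — create a RollingUpdate DaemonSet, push an unpullable image ("foo:non-existent") to start a rollout, then restore the original image before the rollout finishes — can be sketched with client-go. This is a minimal illustration, not the e2e framework's own code; the namespace, label, and container name are assumptions, while the two images follow the log:

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the same kubeconfig path the e2e run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ds := client.AppsV1().DaemonSets("default") // namespace assumed

	labels := map[string]string{"app": "daemon-set"}
	// Create the DaemonSet with the RollingUpdate strategy, as the test does.
	created, err := ds.Create(context.TODO(), &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Trigger a rollout with an image that can never be pulled, then
	// roll back immediately, before the broken rollout completes.
	created.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	broken, err := ds.Update(context.TODO(), created, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	broken.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := ds.Update(context.TODO(), broken, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("rolled back before the broken rollout finished")
}

Reusing the object returned by each call keeps the resourceVersion current, so the second Update does not hit a conflict. In the run logged above, only the pod that had already received the bad image (daemon-set-hmp7t, never available) is replaced by daemon-set-f4slx; the pods still running the original image are left alone, which is exactly the "without unnecessary restarts" property being verified.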
• [SLOW TEST:23.624 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":244,"skipped":4022,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:59:32.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 19 00:59:32.323: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7233 /api/v1/namespaces/watch-7233/configmaps/e2e-watch-test-resource-version cacbcab8-0fba-4bb7-b9e0-8ac999349de3 5831188 0 2020-05-19 00:59:32 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-19 00:59:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 19 00:59:32.324: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7233 /api/v1/namespaces/watch-7233/configmaps/e2e-watch-test-resource-version cacbcab8-0fba-4bb7-b9e0-8ac999349de3 5831189 0 2020-05-19 00:59:32 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-19 00:59:32 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:59:32.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7233" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":245,"skipped":4029,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:59:32.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:59:49.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3701" for this suite. • [SLOW TEST:17.179 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":288,"completed":246,"skipped":4031,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:59:49.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 19 00:59:49.597: INFO: Waiting up to 5m0s for pod "pod-0f5cd420-2efd-4412-b539-027f871b620d" in namespace "emptydir-2308" to be "Succeeded or Failed" May 19 00:59:49.610: INFO: Pod "pod-0f5cd420-2efd-4412-b539-027f871b620d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.748687ms May 19 00:59:51.732: INFO: Pod "pod-0f5cd420-2efd-4412-b539-027f871b620d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135015228s May 19 00:59:53.737: INFO: Pod "pod-0f5cd420-2efd-4412-b539-027f871b620d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.139628665s STEP: Saw pod success May 19 00:59:53.737: INFO: Pod "pod-0f5cd420-2efd-4412-b539-027f871b620d" satisfied condition "Succeeded or Failed" May 19 00:59:53.741: INFO: Trying to get logs from node latest-worker2 pod pod-0f5cd420-2efd-4412-b539-027f871b620d container test-container: STEP: delete the pod May 19 00:59:53.825: INFO: Waiting for pod pod-0f5cd420-2efd-4412-b539-027f871b620d to disappear May 19 00:59:53.832: INFO: Pod pod-0f5cd420-2efd-4412-b539-027f871b620d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 00:59:53.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2308" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":4051,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 00:59:53.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-sx5c STEP: Creating a pod to test atomic-volume-subpath May 19 00:59:53.969: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sx5c" in namespace "subpath-8167" to be "Succeeded or Failed" May 19 00:59:53.982: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.42351ms May 19 00:59:56.003: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034580382s May 19 00:59:58.008: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. Elapsed: 4.039156112s May 19 01:00:00.012: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. Elapsed: 6.043248938s May 19 01:00:02.016: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. Elapsed: 8.047370668s May 19 01:00:04.026: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. Elapsed: 10.057497141s May 19 01:00:06.030: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. Elapsed: 12.061417334s May 19 01:00:08.034: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. Elapsed: 14.065045432s May 19 01:00:10.043: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. Elapsed: 16.074043479s May 19 01:00:12.056: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.087183873s May 19 01:00:14.060: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. Elapsed: 20.091380066s May 19 01:00:16.064: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Running", Reason="", readiness=true. Elapsed: 22.094985087s May 19 01:00:18.068: INFO: Pod "pod-subpath-test-configmap-sx5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.099413148s STEP: Saw pod success May 19 01:00:18.068: INFO: Pod "pod-subpath-test-configmap-sx5c" satisfied condition "Succeeded or Failed" May 19 01:00:18.071: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-sx5c container test-container-subpath-configmap-sx5c: STEP: delete the pod May 19 01:00:18.281: INFO: Waiting for pod pod-subpath-test-configmap-sx5c to disappear May 19 01:00:18.318: INFO: Pod pod-subpath-test-configmap-sx5c no longer exists STEP: Deleting pod pod-subpath-test-configmap-sx5c May 19 01:00:18.318: INFO: Deleting pod "pod-subpath-test-configmap-sx5c" in namespace "subpath-8167" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:00:18.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8167" for this suite. • [SLOW TEST:24.583 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":248,"skipped":4071,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:00:18.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 01:00:22.544: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:00:22.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-runtime-746" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":249,"skipped":4072,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:00:22.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-2012310d-57e4-42cd-9a20-07b993a74417 STEP: Creating configMap with name cm-test-opt-upd-7050df2c-41aa-4855-a7c3-1211d33b05d1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2012310d-57e4-42cd-9a20-07b993a74417 STEP: Updating configmap cm-test-opt-upd-7050df2c-41aa-4855-a7c3-1211d33b05d1 STEP: Creating configMap with name cm-test-opt-create-ffdad309-ed8f-4eb0-876a-a3677311b187 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:01:57.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2625" for this suite. 
• [SLOW TEST:94.725 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":250,"skipped":4095,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:01:57.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3751, will wait for the garbage collector to delete the pods May 19 01:02:03.672: INFO: Deleting Job.batch foo took: 6.888145ms May 19 01:02:03.972: INFO: Terminating Job.batch foo pods took: 300.228202ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:02:37.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3751" for this suite. • [SLOW TEST:39.847 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":251,"skipped":4155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:02:37.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 01:02:41.621: INFO: Waiting up to 5m0s for pod "client-envvars-a68deb88-e5f2-48a9-ae60-23b272e6c698" in namespace "pods-3712" to be "Succeeded or Failed" May 19 01:02:41.638: INFO: Pod "client-envvars-a68deb88-e5f2-48a9-ae60-23b272e6c698": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.240645ms May 19 01:02:43.642: INFO: Pod "client-envvars-a68deb88-e5f2-48a9-ae60-23b272e6c698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021527037s May 19 01:02:45.646: INFO: Pod "client-envvars-a68deb88-e5f2-48a9-ae60-23b272e6c698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025596728s STEP: Saw pod success May 19 01:02:45.647: INFO: Pod "client-envvars-a68deb88-e5f2-48a9-ae60-23b272e6c698" satisfied condition "Succeeded or Failed" May 19 01:02:45.649: INFO: Trying to get logs from node latest-worker2 pod client-envvars-a68deb88-e5f2-48a9-ae60-23b272e6c698 container env3cont: STEP: delete the pod May 19 01:02:45.830: INFO: Waiting for pod client-envvars-a68deb88-e5f2-48a9-ae60-23b272e6c698 to disappear May 19 01:02:45.938: INFO: Pod client-envvars-a68deb88-e5f2-48a9-ae60-23b272e6c698 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:02:45.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3712" for this suite. • [SLOW TEST:8.562 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":252,"skipped":4181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:02:45.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9346 STEP: creating a selector STEP: Creating the service pods in kubernetes May 19 01:02:45.988: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 19 01:02:46.107: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 01:02:48.111: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 01:02:50.111: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 01:02:52.111: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 01:02:54.112: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 01:02:56.111: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 01:02:58.111: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 01:03:00.112: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 01:03:02.111: INFO: The status of Pod netserver-0 is Running (Ready = true) May 19 01:03:02.117: 
INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 01:03:04.123: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 19 01:03:08.158: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.247:8080/dial?request=hostname&protocol=udp&host=10.244.1.245&port=8081&tries=1'] Namespace:pod-network-test-9346 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 01:03:08.158: INFO: >>> kubeConfig: /root/.kube/config I0519 01:03:08.192204 7 log.go:172] (0xc002c11ad0) (0xc002b06f00) Create stream I0519 01:03:08.192246 7 log.go:172] (0xc002c11ad0) (0xc002b06f00) Stream added, broadcasting: 1 I0519 01:03:08.195195 7 log.go:172] (0xc002c11ad0) Reply frame received for 1 I0519 01:03:08.195256 7 log.go:172] (0xc002c11ad0) (0xc002d40000) Create stream I0519 01:03:08.195287 7 log.go:172] (0xc002c11ad0) (0xc002d40000) Stream added, broadcasting: 3 I0519 01:03:08.196249 7 log.go:172] (0xc002c11ad0) Reply frame received for 3 I0519 01:03:08.196293 7 log.go:172] (0xc002c11ad0) (0xc002d40320) Create stream I0519 01:03:08.196312 7 log.go:172] (0xc002c11ad0) (0xc002d40320) Stream added, broadcasting: 5 I0519 01:03:08.197480 7 log.go:172] (0xc002c11ad0) Reply frame received for 5 I0519 01:03:08.274742 7 log.go:172] (0xc002c11ad0) Data frame received for 3 I0519 01:03:08.274764 7 log.go:172] (0xc002d40000) (3) Data frame handling I0519 01:03:08.274777 7 log.go:172] (0xc002d40000) (3) Data frame sent I0519 01:03:08.275110 7 log.go:172] (0xc002c11ad0) Data frame received for 3 I0519 01:03:08.275132 7 log.go:172] (0xc002d40000) (3) Data frame handling I0519 01:03:08.275329 7 log.go:172] (0xc002c11ad0) Data frame received for 5 I0519 01:03:08.275344 7 log.go:172] (0xc002d40320) (5) Data frame handling I0519 01:03:08.276545 7 log.go:172] (0xc002c11ad0) Data frame received for 1 I0519 01:03:08.276561 7 log.go:172] (0xc002b06f00) (1) Data frame handling I0519 01:03:08.276580 7 log.go:172] (0xc002b06f00) (1) Data frame sent I0519 01:03:08.276591 7 log.go:172] (0xc002c11ad0) (0xc002b06f00) Stream removed, broadcasting: 1 I0519 01:03:08.276637 7 log.go:172] (0xc002c11ad0) (0xc002b06f00) Stream removed, broadcasting: 1 I0519 01:03:08.276645 7 log.go:172] (0xc002c11ad0) (0xc002d40000) Stream removed, broadcasting: 3 I0519 01:03:08.276650 7 log.go:172] (0xc002c11ad0) (0xc002d40320) Stream removed, broadcasting: 5 I0519 01:03:08.276670 7 log.go:172] (0xc002c11ad0) Go away received May 19 01:03:08.276: INFO: Waiting for responses: map[] May 19 01:03:08.279: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.247:8080/dial?request=hostname&protocol=udp&host=10.244.2.246&port=8081&tries=1'] Namespace:pod-network-test-9346 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 19 01:03:08.279: INFO: >>> kubeConfig: /root/.kube/config I0519 01:03:08.307376 7 log.go:172] (0xc002bee420) (0xc002036640) Create stream I0519 01:03:08.307400 7 log.go:172] (0xc002bee420) (0xc002036640) Stream added, broadcasting: 1 I0519 01:03:08.309704 7 log.go:172] (0xc002bee420) Reply frame received for 1 I0519 01:03:08.309740 7 log.go:172] (0xc002bee420) (0xc001b00000) Create stream I0519 01:03:08.309759 7 log.go:172] (0xc002bee420) (0xc001b00000) Stream added, broadcasting: 3 I0519 01:03:08.310613 7 log.go:172] (0xc002bee420) Reply frame received for 3 I0519 01:03:08.310660 7 log.go:172] (0xc002bee420) 
(0xc0020366e0) Create stream I0519 01:03:08.310682 7 log.go:172] (0xc002bee420) (0xc0020366e0) Stream added, broadcasting: 5 I0519 01:03:08.311471 7 log.go:172] (0xc002bee420) Reply frame received for 5 I0519 01:03:08.378111 7 log.go:172] (0xc002bee420) Data frame received for 3 I0519 01:03:08.378132 7 log.go:172] (0xc001b00000) (3) Data frame handling I0519 01:03:08.378145 7 log.go:172] (0xc001b00000) (3) Data frame sent I0519 01:03:08.378795 7 log.go:172] (0xc002bee420) Data frame received for 5 I0519 01:03:08.378808 7 log.go:172] (0xc0020366e0) (5) Data frame handling I0519 01:03:08.378826 7 log.go:172] (0xc002bee420) Data frame received for 3 I0519 01:03:08.378839 7 log.go:172] (0xc001b00000) (3) Data frame handling I0519 01:03:08.380452 7 log.go:172] (0xc002bee420) Data frame received for 1 I0519 01:03:08.380465 7 log.go:172] (0xc002036640) (1) Data frame handling I0519 01:03:08.380475 7 log.go:172] (0xc002036640) (1) Data frame sent I0519 01:03:08.380483 7 log.go:172] (0xc002bee420) (0xc002036640) Stream removed, broadcasting: 1 I0519 01:03:08.380492 7 log.go:172] (0xc002bee420) Go away received I0519 01:03:08.380713 7 log.go:172] (0xc002bee420) (0xc002036640) Stream removed, broadcasting: 1 I0519 01:03:08.380744 7 log.go:172] (0xc002bee420) (0xc001b00000) Stream removed, broadcasting: 3 I0519 01:03:08.380757 7 log.go:172] (0xc002bee420) (0xc0020366e0) Stream removed, broadcasting: 5 May 19 01:03:08.380: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:03:08.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9346" for this suite. • [SLOW TEST:22.442 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4207,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:03:08.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 19 01:03:08.460: INFO: Waiting up to 5m0s for pod "pod-dd970279-091a-43c8-bf85-50f694fd71a4" in namespace "emptydir-2617" to be "Succeeded or Failed" May 19 01:03:08.463: INFO: Pod "pod-dd970279-091a-43c8-bf85-50f694fd71a4": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.315411ms May 19 01:03:10.467: INFO: Pod "pod-dd970279-091a-43c8-bf85-50f694fd71a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007110751s May 19 01:03:12.471: INFO: Pod "pod-dd970279-091a-43c8-bf85-50f694fd71a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011181543s STEP: Saw pod success May 19 01:03:12.471: INFO: Pod "pod-dd970279-091a-43c8-bf85-50f694fd71a4" satisfied condition "Succeeded or Failed" May 19 01:03:12.474: INFO: Trying to get logs from node latest-worker2 pod pod-dd970279-091a-43c8-bf85-50f694fd71a4 container test-container: STEP: delete the pod May 19 01:03:12.511: INFO: Waiting for pod pod-dd970279-091a-43c8-bf85-50f694fd71a4 to disappear May 19 01:03:12.596: INFO: Pod pod-dd970279-091a-43c8-bf85-50f694fd71a4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:03:12.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2617" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":254,"skipped":4208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:03:12.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 01:03:12.688: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9884 I0519 01:03:12.766371 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9884, replica count: 1 I0519 01:03:13.816763 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 01:03:14.816980 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 01:03:15.817344 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 01:03:16.817583 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 01:03:17.817804 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 01:03:17.949: INFO: Created: latency-svc-skrmr May 19 01:03:17.993: INFO: Got endpoints: latency-svc-skrmr [75.447935ms] May 19 01:03:18.019: INFO: Created: latency-svc-62v4b May 19 01:03:18.033: INFO: Got endpoints: latency-svc-62v4b [40.077191ms] May 19 01:03:18.130: INFO: Created: latency-svc-n6q8p May 19 01:03:18.158: INFO: Created: 
latency-svc-585tg May 19 01:03:18.159: INFO: Got endpoints: latency-svc-n6q8p [165.801865ms] May 19 01:03:18.171: INFO: Got endpoints: latency-svc-585tg [177.56476ms] May 19 01:03:18.219: INFO: Created: latency-svc-xqqhv May 19 01:03:18.262: INFO: Got endpoints: latency-svc-xqqhv [268.608006ms] May 19 01:03:18.293: INFO: Created: latency-svc-68gt6 May 19 01:03:18.323: INFO: Got endpoints: latency-svc-68gt6 [329.815741ms] May 19 01:03:18.358: INFO: Created: latency-svc-c8qdz May 19 01:03:18.399: INFO: Got endpoints: latency-svc-c8qdz [405.54585ms] May 19 01:03:18.415: INFO: Created: latency-svc-kx9c7 May 19 01:03:18.430: INFO: Got endpoints: latency-svc-kx9c7 [436.328939ms] May 19 01:03:18.458: INFO: Created: latency-svc-rpl8d May 19 01:03:18.496: INFO: Got endpoints: latency-svc-rpl8d [502.93304ms] May 19 01:03:18.555: INFO: Created: latency-svc-shg2p May 19 01:03:18.563: INFO: Got endpoints: latency-svc-shg2p [569.790418ms] May 19 01:03:18.592: INFO: Created: latency-svc-ql4lg May 19 01:03:18.606: INFO: Got endpoints: latency-svc-ql4lg [612.371363ms] May 19 01:03:18.634: INFO: Created: latency-svc-gcmcp May 19 01:03:18.646: INFO: Got endpoints: latency-svc-gcmcp [653.025746ms] May 19 01:03:18.703: INFO: Created: latency-svc-8gjm4 May 19 01:03:18.713: INFO: Got endpoints: latency-svc-8gjm4 [719.51115ms] May 19 01:03:18.755: INFO: Created: latency-svc-d5wrv May 19 01:03:18.824: INFO: Got endpoints: latency-svc-d5wrv [830.437489ms] May 19 01:03:18.844: INFO: Created: latency-svc-lq46m May 19 01:03:18.852: INFO: Got endpoints: latency-svc-lq46m [858.657896ms] May 19 01:03:18.914: INFO: Created: latency-svc-mg54w May 19 01:03:18.980: INFO: Got endpoints: latency-svc-mg54w [986.326952ms] May 19 01:03:18.984: INFO: Created: latency-svc-9r9v4 May 19 01:03:18.997: INFO: Got endpoints: latency-svc-9r9v4 [963.744859ms] May 19 01:03:19.038: INFO: Created: latency-svc-ss9qz May 19 01:03:19.051: INFO: Got endpoints: latency-svc-ss9qz [891.966253ms] May 19 01:03:19.070: INFO: Created: latency-svc-2jwf4 May 19 01:03:19.129: INFO: Got endpoints: latency-svc-2jwf4 [958.579939ms] May 19 01:03:19.143: INFO: Created: latency-svc-ltw6d May 19 01:03:19.158: INFO: Got endpoints: latency-svc-ltw6d [896.167976ms] May 19 01:03:19.187: INFO: Created: latency-svc-nl99r May 19 01:03:19.217: INFO: Got endpoints: latency-svc-nl99r [894.379923ms] May 19 01:03:19.289: INFO: Created: latency-svc-zv8gd May 19 01:03:19.327: INFO: Got endpoints: latency-svc-zv8gd [928.190765ms] May 19 01:03:19.355: INFO: Created: latency-svc-h2zmw May 19 01:03:19.424: INFO: Got endpoints: latency-svc-h2zmw [994.530864ms] May 19 01:03:19.425: INFO: Created: latency-svc-jk567 May 19 01:03:19.437: INFO: Got endpoints: latency-svc-jk567 [940.266732ms] May 19 01:03:19.481: INFO: Created: latency-svc-8zjxx May 19 01:03:19.496: INFO: Got endpoints: latency-svc-8zjxx [933.220321ms] May 19 01:03:19.567: INFO: Created: latency-svc-pls6l May 19 01:03:19.579: INFO: Got endpoints: latency-svc-pls6l [973.631558ms] May 19 01:03:19.646: INFO: Created: latency-svc-8jv7b May 19 01:03:19.771: INFO: Got endpoints: latency-svc-8jv7b [1.124184724s] May 19 01:03:19.804: INFO: Created: latency-svc-b97f8 May 19 01:03:19.821: INFO: Got endpoints: latency-svc-b97f8 [1.107530393s] May 19 01:03:19.920: INFO: Created: latency-svc-wnss7 May 19 01:03:19.936: INFO: Got endpoints: latency-svc-wnss7 [1.112047479s] May 19 01:03:19.958: INFO: Created: latency-svc-9z8wh May 19 01:03:19.975: INFO: Got endpoints: latency-svc-9z8wh [1.123119083s] May 19 01:03:20.019: INFO: Created: 
latency-svc-fhw42 May 19 01:03:20.052: INFO: Got endpoints: latency-svc-fhw42 [1.07214695s] May 19 01:03:20.063: INFO: Created: latency-svc-kmjr6 May 19 01:03:20.078: INFO: Got endpoints: latency-svc-kmjr6 [1.081288793s] May 19 01:03:20.099: INFO: Created: latency-svc-mc99h May 19 01:03:20.130: INFO: Got endpoints: latency-svc-mc99h [1.078593961s] May 19 01:03:20.207: INFO: Created: latency-svc-pz42w May 19 01:03:20.222: INFO: Got endpoints: latency-svc-pz42w [1.092787132s] May 19 01:03:20.262: INFO: Created: latency-svc-24clc May 19 01:03:20.280: INFO: Got endpoints: latency-svc-24clc [1.121721399s] May 19 01:03:20.303: INFO: Created: latency-svc-g7dfc May 19 01:03:20.369: INFO: Got endpoints: latency-svc-g7dfc [1.151496871s] May 19 01:03:20.396: INFO: Created: latency-svc-ddq9v May 19 01:03:20.426: INFO: Got endpoints: latency-svc-ddq9v [1.098820262s] May 19 01:03:20.469: INFO: Created: latency-svc-9txvq May 19 01:03:20.555: INFO: Got endpoints: latency-svc-9txvq [1.130206278s] May 19 01:03:20.559: INFO: Created: latency-svc-spmzd May 19 01:03:20.567: INFO: Got endpoints: latency-svc-spmzd [1.130050037s] May 19 01:03:20.588: INFO: Created: latency-svc-dd59v May 19 01:03:20.604: INFO: Got endpoints: latency-svc-dd59v [1.107235471s] May 19 01:03:20.630: INFO: Created: latency-svc-p22v4 May 19 01:03:20.646: INFO: Got endpoints: latency-svc-p22v4 [1.066413127s] May 19 01:03:20.754: INFO: Created: latency-svc-cg86v May 19 01:03:20.766: INFO: Got endpoints: latency-svc-cg86v [994.840891ms] May 19 01:03:20.846: INFO: Created: latency-svc-jrd7h May 19 01:03:20.850: INFO: Got endpoints: latency-svc-jrd7h [1.029233185s] May 19 01:03:20.934: INFO: Created: latency-svc-v28nz May 19 01:03:21.004: INFO: Got endpoints: latency-svc-v28nz [1.067431006s] May 19 01:03:21.020: INFO: Created: latency-svc-zdgtm May 19 01:03:21.091: INFO: Got endpoints: latency-svc-zdgtm [1.115878336s] May 19 01:03:21.178: INFO: Created: latency-svc-x2dpx May 19 01:03:21.215: INFO: Got endpoints: latency-svc-x2dpx [1.163263479s] May 19 01:03:21.217: INFO: Created: latency-svc-2pt2f May 19 01:03:21.321: INFO: Got endpoints: latency-svc-2pt2f [1.24282358s] May 19 01:03:21.325: INFO: Created: latency-svc-tb8xg May 19 01:03:21.342: INFO: Got endpoints: latency-svc-tb8xg [1.212077155s] May 19 01:03:21.377: INFO: Created: latency-svc-fgjm7 May 19 01:03:21.391: INFO: Got endpoints: latency-svc-fgjm7 [1.168840478s] May 19 01:03:21.407: INFO: Created: latency-svc-9h9h9 May 19 01:03:21.477: INFO: Got endpoints: latency-svc-9h9h9 [1.196638841s] May 19 01:03:21.479: INFO: Created: latency-svc-qn5d6 May 19 01:03:21.512: INFO: Got endpoints: latency-svc-qn5d6 [1.143327657s] May 19 01:03:21.563: INFO: Created: latency-svc-9s47c May 19 01:03:21.646: INFO: Got endpoints: latency-svc-9s47c [1.21967361s] May 19 01:03:21.711: INFO: Created: latency-svc-pmq9g May 19 01:03:21.796: INFO: Got endpoints: latency-svc-pmq9g [1.240981344s] May 19 01:03:21.842: INFO: Created: latency-svc-559s2 May 19 01:03:21.860: INFO: Got endpoints: latency-svc-559s2 [1.293302048s] May 19 01:03:21.933: INFO: Created: latency-svc-wxdxw May 19 01:03:21.936: INFO: Got endpoints: latency-svc-wxdxw [1.332554716s] May 19 01:03:21.977: INFO: Created: latency-svc-snrcj May 19 01:03:21.994: INFO: Got endpoints: latency-svc-snrcj [1.347768499s] May 19 01:03:22.007: INFO: Created: latency-svc-gcl6v May 19 01:03:22.023: INFO: Got endpoints: latency-svc-gcl6v [1.257631365s] May 19 01:03:22.064: INFO: Created: latency-svc-zh6b8 May 19 01:03:22.071: INFO: Got endpoints: 
latency-svc-zh6b8 [1.220765395s] May 19 01:03:22.094: INFO: Created: latency-svc-ddhhz May 19 01:03:22.108: INFO: Got endpoints: latency-svc-ddhhz [1.104290705s] May 19 01:03:22.122: INFO: Created: latency-svc-g7x2r May 19 01:03:22.147: INFO: Got endpoints: latency-svc-g7x2r [1.055552588s] May 19 01:03:22.215: INFO: Created: latency-svc-hnhhp May 19 01:03:22.238: INFO: Got endpoints: latency-svc-hnhhp [1.022537214s] May 19 01:03:22.240: INFO: Created: latency-svc-hvf6c May 19 01:03:22.262: INFO: Got endpoints: latency-svc-hvf6c [941.196932ms] May 19 01:03:22.307: INFO: Created: latency-svc-25l6h May 19 01:03:22.357: INFO: Got endpoints: latency-svc-25l6h [1.015555825s] May 19 01:03:22.386: INFO: Created: latency-svc-wxsbs May 19 01:03:22.397: INFO: Got endpoints: latency-svc-wxsbs [1.006141313s] May 19 01:03:22.443: INFO: Created: latency-svc-xz6gn May 19 01:03:22.501: INFO: Got endpoints: latency-svc-xz6gn [1.024057253s] May 19 01:03:22.530: INFO: Created: latency-svc-pn8b8 May 19 01:03:22.554: INFO: Got endpoints: latency-svc-pn8b8 [1.041905696s] May 19 01:03:22.572: INFO: Created: latency-svc-4qdnq May 19 01:03:22.584: INFO: Got endpoints: latency-svc-4qdnq [938.136736ms] May 19 01:03:22.687: INFO: Created: latency-svc-vhk5c May 19 01:03:22.700: INFO: Got endpoints: latency-svc-vhk5c [904.152636ms] May 19 01:03:22.730: INFO: Created: latency-svc-wftdw May 19 01:03:22.750: INFO: Got endpoints: latency-svc-wftdw [889.81278ms] May 19 01:03:22.831: INFO: Created: latency-svc-vf4m5 May 19 01:03:22.860: INFO: Got endpoints: latency-svc-vf4m5 [923.472048ms] May 19 01:03:22.861: INFO: Created: latency-svc-tbd6f May 19 01:03:22.892: INFO: Got endpoints: latency-svc-tbd6f [898.645595ms] May 19 01:03:22.923: INFO: Created: latency-svc-4t9db May 19 01:03:22.979: INFO: Got endpoints: latency-svc-4t9db [956.020729ms] May 19 01:03:22.984: INFO: Created: latency-svc-hft6s May 19 01:03:22.998: INFO: Got endpoints: latency-svc-hft6s [927.35848ms] May 19 01:03:23.055: INFO: Created: latency-svc-hsmrk May 19 01:03:23.160: INFO: Got endpoints: latency-svc-hsmrk [1.05219651s] May 19 01:03:23.198: INFO: Created: latency-svc-kxkcn May 19 01:03:23.224: INFO: Got endpoints: latency-svc-kxkcn [1.077229442s] May 19 01:03:23.256: INFO: Created: latency-svc-hwm9w May 19 01:03:23.333: INFO: Got endpoints: latency-svc-hwm9w [1.095232671s] May 19 01:03:23.346: INFO: Created: latency-svc-wnw57 May 19 01:03:23.372: INFO: Got endpoints: latency-svc-wnw57 [1.10992876s] May 19 01:03:23.507: INFO: Created: latency-svc-d96xs May 19 01:03:23.510: INFO: Got endpoints: latency-svc-d96xs [1.152998224s] May 19 01:03:23.565: INFO: Created: latency-svc-cgbjv May 19 01:03:23.590: INFO: Got endpoints: latency-svc-cgbjv [1.19274109s] May 19 01:03:23.669: INFO: Created: latency-svc-p2vpw May 19 01:03:23.685: INFO: Got endpoints: latency-svc-p2vpw [1.183614236s] May 19 01:03:23.714: INFO: Created: latency-svc-bdw74 May 19 01:03:23.740: INFO: Got endpoints: latency-svc-bdw74 [1.185316478s] May 19 01:03:23.926: INFO: Created: latency-svc-2v5fn May 19 01:03:23.949: INFO: Got endpoints: latency-svc-2v5fn [1.365204262s] May 19 01:03:23.991: INFO: Created: latency-svc-msnqb May 19 01:03:24.004: INFO: Got endpoints: latency-svc-msnqb [1.304225186s] May 19 01:03:24.021: INFO: Created: latency-svc-vnj46 May 19 01:03:24.082: INFO: Got endpoints: latency-svc-vnj46 [1.33197748s] May 19 01:03:24.094: INFO: Created: latency-svc-lpjbf May 19 01:03:24.100: INFO: Got endpoints: latency-svc-lpjbf [1.240245216s] May 19 01:03:24.123: INFO: Created: 
latency-svc-kzl4d May 19 01:03:24.243: INFO: Got endpoints: latency-svc-kzl4d [1.350979483s] May 19 01:03:24.245: INFO: Created: latency-svc-m82pj May 19 01:03:24.261: INFO: Got endpoints: latency-svc-m82pj [1.28158538s] May 19 01:03:24.285: INFO: Created: latency-svc-7vmx4 May 19 01:03:24.298: INFO: Got endpoints: latency-svc-7vmx4 [1.299964071s] May 19 01:03:24.319: INFO: Created: latency-svc-k47tr May 19 01:03:24.337: INFO: Got endpoints: latency-svc-k47tr [1.17626841s] May 19 01:03:24.387: INFO: Created: latency-svc-sgw5n May 19 01:03:24.394: INFO: Got endpoints: latency-svc-sgw5n [1.170276191s] May 19 01:03:24.411: INFO: Created: latency-svc-qx2nj May 19 01:03:24.425: INFO: Got endpoints: latency-svc-qx2nj [1.091306262s] May 19 01:03:24.441: INFO: Created: latency-svc-l7g5h May 19 01:03:24.475: INFO: Got endpoints: latency-svc-l7g5h [1.102019699s] May 19 01:03:24.561: INFO: Created: latency-svc-qq252 May 19 01:03:24.603: INFO: Got endpoints: latency-svc-qq252 [1.092860836s] May 19 01:03:24.604: INFO: Created: latency-svc-65wpv May 19 01:03:24.618: INFO: Got endpoints: latency-svc-65wpv [1.027630338s] May 19 01:03:24.641: INFO: Created: latency-svc-gjvpg May 19 01:03:24.711: INFO: Got endpoints: latency-svc-gjvpg [1.02639155s] May 19 01:03:24.751: INFO: Created: latency-svc-rsdvs May 19 01:03:24.762: INFO: Got endpoints: latency-svc-rsdvs [1.022217808s] May 19 01:03:24.800: INFO: Created: latency-svc-mgjzb May 19 01:03:24.878: INFO: Got endpoints: latency-svc-mgjzb [928.896778ms] May 19 01:03:24.884: INFO: Created: latency-svc-rv6sv May 19 01:03:24.894: INFO: Got endpoints: latency-svc-rv6sv [890.247385ms] May 19 01:03:24.912: INFO: Created: latency-svc-46vsj May 19 01:03:24.926: INFO: Got endpoints: latency-svc-46vsj [843.622258ms] May 19 01:03:24.960: INFO: Created: latency-svc-vzz8t May 19 01:03:25.058: INFO: Got endpoints: latency-svc-vzz8t [957.404498ms] May 19 01:03:25.064: INFO: Created: latency-svc-qxqq9 May 19 01:03:25.075: INFO: Got endpoints: latency-svc-qxqq9 [831.948417ms] May 19 01:03:25.095: INFO: Created: latency-svc-f9jht May 19 01:03:25.106: INFO: Got endpoints: latency-svc-f9jht [845.00128ms] May 19 01:03:25.128: INFO: Created: latency-svc-khg42 May 19 01:03:25.142: INFO: Got endpoints: latency-svc-khg42 [844.068097ms] May 19 01:03:25.207: INFO: Created: latency-svc-cg4zw May 19 01:03:25.227: INFO: Got endpoints: latency-svc-cg4zw [890.642735ms] May 19 01:03:25.247: INFO: Created: latency-svc-982g5 May 19 01:03:25.269: INFO: Got endpoints: latency-svc-982g5 [874.606347ms] May 19 01:03:25.296: INFO: Created: latency-svc-pljdt May 19 01:03:25.382: INFO: Got endpoints: latency-svc-pljdt [956.875901ms] May 19 01:03:25.386: INFO: Created: latency-svc-fzbl5 May 19 01:03:25.395: INFO: Got endpoints: latency-svc-fzbl5 [920.454642ms] May 19 01:03:25.418: INFO: Created: latency-svc-szzjz May 19 01:03:25.432: INFO: Got endpoints: latency-svc-szzjz [828.099385ms] May 19 01:03:25.449: INFO: Created: latency-svc-qgjnf May 19 01:03:25.462: INFO: Got endpoints: latency-svc-qgjnf [844.13518ms] May 19 01:03:25.476: INFO: Created: latency-svc-m5x47 May 19 01:03:25.543: INFO: Got endpoints: latency-svc-m5x47 [832.400122ms] May 19 01:03:25.546: INFO: Created: latency-svc-jg6qq May 19 01:03:25.552: INFO: Got endpoints: latency-svc-jg6qq [790.305769ms] May 19 01:03:25.569: INFO: Created: latency-svc-69m5j May 19 01:03:25.583: INFO: Got endpoints: latency-svc-69m5j [704.989212ms] May 19 01:03:25.599: INFO: Created: latency-svc-ng59k May 19 01:03:25.617: INFO: Got endpoints: 
latency-svc-ng59k [722.754924ms] May 19 01:03:25.704: INFO: Created: latency-svc-hszpc May 19 01:03:25.741: INFO: Got endpoints: latency-svc-hszpc [814.939918ms] May 19 01:03:25.749: INFO: Created: latency-svc-7txlx May 19 01:03:25.785: INFO: Got endpoints: latency-svc-7txlx [727.424012ms] May 19 01:03:25.878: INFO: Created: latency-svc-cg5pl May 19 01:03:25.903: INFO: Got endpoints: latency-svc-cg5pl [827.283317ms] May 19 01:03:25.927: INFO: Created: latency-svc-m49vv May 19 01:03:25.938: INFO: Got endpoints: latency-svc-m49vv [832.181946ms] May 19 01:03:25.956: INFO: Created: latency-svc-n4s87 May 19 01:03:25.969: INFO: Got endpoints: latency-svc-n4s87 [826.40206ms] May 19 01:03:26.046: INFO: Created: latency-svc-tkwvh May 19 01:03:26.048: INFO: Got endpoints: latency-svc-tkwvh [821.152195ms] May 19 01:03:26.076: INFO: Created: latency-svc-pnkpt May 19 01:03:26.089: INFO: Got endpoints: latency-svc-pnkpt [819.713681ms] May 19 01:03:26.106: INFO: Created: latency-svc-bgtrd May 19 01:03:26.119: INFO: Got endpoints: latency-svc-bgtrd [737.453657ms] May 19 01:03:26.136: INFO: Created: latency-svc-xc5xz May 19 01:03:26.189: INFO: Got endpoints: latency-svc-xc5xz [794.17139ms] May 19 01:03:26.205: INFO: Created: latency-svc-j6dkw May 19 01:03:26.216: INFO: Got endpoints: latency-svc-j6dkw [783.829281ms] May 19 01:03:26.268: INFO: Created: latency-svc-smdrl May 19 01:03:26.282: INFO: Got endpoints: latency-svc-smdrl [820.408254ms] May 19 01:03:26.367: INFO: Created: latency-svc-5nfrj May 19 01:03:26.391: INFO: Got endpoints: latency-svc-5nfrj [847.735552ms] May 19 01:03:26.413: INFO: Created: latency-svc-qbr7t May 19 01:03:26.427: INFO: Got endpoints: latency-svc-qbr7t [874.098784ms] May 19 01:03:26.442: INFO: Created: latency-svc-2kthm May 19 01:03:26.514: INFO: Got endpoints: latency-svc-2kthm [930.529652ms] May 19 01:03:26.521: INFO: Created: latency-svc-6sbxr May 19 01:03:26.541: INFO: Got endpoints: latency-svc-6sbxr [923.641414ms] May 19 01:03:26.571: INFO: Created: latency-svc-jh8z8 May 19 01:03:26.585: INFO: Got endpoints: latency-svc-jh8z8 [844.01615ms] May 19 01:03:26.604: INFO: Created: latency-svc-sqrkh May 19 01:03:26.675: INFO: Got endpoints: latency-svc-sqrkh [889.491438ms] May 19 01:03:26.676: INFO: Created: latency-svc-xc96v May 19 01:03:26.680: INFO: Got endpoints: latency-svc-xc96v [777.709107ms] May 19 01:03:26.721: INFO: Created: latency-svc-wgcsr May 19 01:03:26.735: INFO: Got endpoints: latency-svc-wgcsr [797.117222ms] May 19 01:03:26.763: INFO: Created: latency-svc-ns85m May 19 01:03:26.885: INFO: Got endpoints: latency-svc-ns85m [916.263901ms] May 19 01:03:26.889: INFO: Created: latency-svc-ntfbl May 19 01:03:26.904: INFO: Got endpoints: latency-svc-ntfbl [855.2955ms] May 19 01:03:26.943: INFO: Created: latency-svc-54wzl May 19 01:03:26.958: INFO: Got endpoints: latency-svc-54wzl [869.113976ms] May 19 01:03:26.973: INFO: Created: latency-svc-rgtth May 19 01:03:26.983: INFO: Got endpoints: latency-svc-rgtth [863.355189ms] May 19 01:03:27.052: INFO: Created: latency-svc-vbh8r May 19 01:03:27.075: INFO: Got endpoints: latency-svc-vbh8r [885.68398ms] May 19 01:03:27.098: INFO: Created: latency-svc-jl8vr May 19 01:03:27.121: INFO: Got endpoints: latency-svc-jl8vr [905.442071ms] May 19 01:03:27.202: INFO: Created: latency-svc-clbqh May 19 01:03:27.242: INFO: Created: latency-svc-nck29 May 19 01:03:27.242: INFO: Got endpoints: latency-svc-clbqh [959.274841ms] May 19 01:03:27.261: INFO: Got endpoints: latency-svc-nck29 [869.816789ms] May 19 01:03:27.285: INFO: Created: 
latency-svc-rclmd May 19 01:03:27.296: INFO: Got endpoints: latency-svc-rclmd [869.15879ms] May 19 01:03:27.361: INFO: Created: latency-svc-hjtq4 May 19 01:03:27.374: INFO: Got endpoints: latency-svc-hjtq4 [860.259674ms] May 19 01:03:27.403: INFO: Created: latency-svc-hfmph May 19 01:03:27.422: INFO: Got endpoints: latency-svc-hfmph [881.521123ms] May 19 01:03:27.483: INFO: Created: latency-svc-tkrkg May 19 01:03:27.507: INFO: Created: latency-svc-7bpv4 May 19 01:03:27.507: INFO: Got endpoints: latency-svc-tkrkg [922.422832ms] May 19 01:03:27.519: INFO: Got endpoints: latency-svc-7bpv4 [843.939497ms] May 19 01:03:27.639: INFO: Created: latency-svc-2p2cn May 19 01:03:27.643: INFO: Got endpoints: latency-svc-2p2cn [962.480373ms] May 19 01:03:27.669: INFO: Created: latency-svc-8z8wv May 19 01:03:27.683: INFO: Got endpoints: latency-svc-8z8wv [947.438708ms] May 19 01:03:27.807: INFO: Created: latency-svc-dn8rk May 19 01:03:27.828: INFO: Got endpoints: latency-svc-dn8rk [943.292487ms] May 19 01:03:27.873: INFO: Created: latency-svc-l2xq5 May 19 01:03:27.886: INFO: Got endpoints: latency-svc-l2xq5 [982.510438ms] May 19 01:03:27.950: INFO: Created: latency-svc-qszzx May 19 01:03:27.973: INFO: Got endpoints: latency-svc-qszzx [1.014627054s] May 19 01:03:27.973: INFO: Created: latency-svc-2fbdd May 19 01:03:27.997: INFO: Got endpoints: latency-svc-2fbdd [1.014667164s] May 19 01:03:28.027: INFO: Created: latency-svc-smnqz May 19 01:03:28.037: INFO: Got endpoints: latency-svc-smnqz [962.037011ms] May 19 01:03:28.106: INFO: Created: latency-svc-rpch7 May 19 01:03:28.131: INFO: Created: latency-svc-6589g May 19 01:03:28.131: INFO: Got endpoints: latency-svc-rpch7 [1.010114744s] May 19 01:03:28.149: INFO: Got endpoints: latency-svc-6589g [907.153454ms] May 19 01:03:28.176: INFO: Created: latency-svc-wcfvd May 19 01:03:28.188: INFO: Got endpoints: latency-svc-wcfvd [926.405723ms] May 19 01:03:28.250: INFO: Created: latency-svc-dtsh9 May 19 01:03:28.252: INFO: Got endpoints: latency-svc-dtsh9 [956.471747ms] May 19 01:03:28.306: INFO: Created: latency-svc-ngn6r May 19 01:03:28.323: INFO: Got endpoints: latency-svc-ngn6r [948.987277ms] May 19 01:03:28.349: INFO: Created: latency-svc-sr8f8 May 19 01:03:28.415: INFO: Got endpoints: latency-svc-sr8f8 [992.893274ms] May 19 01:03:28.447: INFO: Created: latency-svc-l5tkn May 19 01:03:28.459: INFO: Got endpoints: latency-svc-l5tkn [951.551913ms] May 19 01:03:28.479: INFO: Created: latency-svc-gjt5r May 19 01:03:28.502: INFO: Got endpoints: latency-svc-gjt5r [983.038821ms] May 19 01:03:28.550: INFO: Created: latency-svc-7smfn May 19 01:03:28.552: INFO: Got endpoints: latency-svc-7smfn [909.016747ms] May 19 01:03:28.602: INFO: Created: latency-svc-7vg4x May 19 01:03:28.615: INFO: Got endpoints: latency-svc-7vg4x [932.390237ms] May 19 01:03:28.632: INFO: Created: latency-svc-22jqr May 19 01:03:28.647: INFO: Got endpoints: latency-svc-22jqr [818.732119ms] May 19 01:03:28.704: INFO: Created: latency-svc-mjk6q May 19 01:03:28.708: INFO: Got endpoints: latency-svc-mjk6q [821.34607ms] May 19 01:03:28.743: INFO: Created: latency-svc-7zrtr May 19 01:03:28.765: INFO: Got endpoints: latency-svc-7zrtr [792.121149ms] May 19 01:03:28.866: INFO: Created: latency-svc-g5mx5 May 19 01:03:28.887: INFO: Got endpoints: latency-svc-g5mx5 [889.653925ms] May 19 01:03:28.935: INFO: Created: latency-svc-sspxq May 19 01:03:28.966: INFO: Got endpoints: latency-svc-sspxq [928.386468ms] May 19 01:03:29.016: INFO: Created: latency-svc-fx4wx May 19 01:03:29.041: INFO: Got endpoints: 
latency-svc-fx4wx [909.586414ms] May 19 01:03:29.043: INFO: Created: latency-svc-m6g5f May 19 01:03:29.065: INFO: Got endpoints: latency-svc-m6g5f [915.802321ms] May 19 01:03:29.101: INFO: Created: latency-svc-pdg8b May 19 01:03:29.109: INFO: Got endpoints: latency-svc-pdg8b [921.58149ms] May 19 01:03:29.178: INFO: Created: latency-svc-krrqs May 19 01:03:29.187: INFO: Got endpoints: latency-svc-krrqs [934.521934ms] May 19 01:03:29.245: INFO: Created: latency-svc-5mk4w May 19 01:03:29.371: INFO: Got endpoints: latency-svc-5mk4w [1.048043548s] May 19 01:03:29.373: INFO: Created: latency-svc-wh62d May 19 01:03:29.385: INFO: Got endpoints: latency-svc-wh62d [969.379715ms] May 19 01:03:29.415: INFO: Created: latency-svc-48nc5 May 19 01:03:29.443: INFO: Got endpoints: latency-svc-48nc5 [984.067613ms] May 19 01:03:29.466: INFO: Created: latency-svc-cpjvv May 19 01:03:29.507: INFO: Got endpoints: latency-svc-cpjvv [1.004736718s] May 19 01:03:29.523: INFO: Created: latency-svc-jt6gh May 19 01:03:29.536: INFO: Got endpoints: latency-svc-jt6gh [983.883893ms] May 19 01:03:29.554: INFO: Created: latency-svc-qfjmd May 19 01:03:29.566: INFO: Got endpoints: latency-svc-qfjmd [951.088808ms] May 19 01:03:29.583: INFO: Created: latency-svc-dqzfl May 19 01:03:29.650: INFO: Got endpoints: latency-svc-dqzfl [1.003072426s] May 19 01:03:29.666: INFO: Created: latency-svc-ph5hx May 19 01:03:29.681: INFO: Got endpoints: latency-svc-ph5hx [973.316723ms] May 19 01:03:29.716: INFO: Created: latency-svc-rfvmg May 19 01:03:29.729: INFO: Got endpoints: latency-svc-rfvmg [964.376705ms] May 19 01:03:29.812: INFO: Created: latency-svc-g6q9l May 19 01:03:29.863: INFO: Got endpoints: latency-svc-g6q9l [976.011873ms] May 19 01:03:29.864: INFO: Created: latency-svc-6q6sf May 19 01:03:29.980: INFO: Got endpoints: latency-svc-6q6sf [1.014639631s] May 19 01:03:30.008: INFO: Created: latency-svc-9d25h May 19 01:03:30.018: INFO: Got endpoints: latency-svc-9d25h [976.711691ms] May 19 01:03:30.036: INFO: Created: latency-svc-ctvnx May 19 01:03:30.049: INFO: Got endpoints: latency-svc-ctvnx [984.266914ms] May 19 01:03:30.067: INFO: Created: latency-svc-98lhn May 19 01:03:30.130: INFO: Got endpoints: latency-svc-98lhn [1.020796862s] May 19 01:03:30.148: INFO: Created: latency-svc-4449q May 19 01:03:30.163: INFO: Got endpoints: latency-svc-4449q [976.019846ms] May 19 01:03:30.202: INFO: Created: latency-svc-ntlqx May 19 01:03:30.224: INFO: Got endpoints: latency-svc-ntlqx [852.742725ms] May 19 01:03:30.315: INFO: Created: latency-svc-hwlmh May 19 01:03:30.331: INFO: Got endpoints: latency-svc-hwlmh [945.655186ms] May 19 01:03:30.351: INFO: Created: latency-svc-b4sqk May 19 01:03:30.362: INFO: Got endpoints: latency-svc-b4sqk [919.18566ms] May 19 01:03:30.393: INFO: Created: latency-svc-5k9jl May 19 01:03:30.501: INFO: Got endpoints: latency-svc-5k9jl [994.764576ms] May 19 01:03:30.519: INFO: Created: latency-svc-n8rxw May 19 01:03:30.548: INFO: Got endpoints: latency-svc-n8rxw [1.012267258s] May 19 01:03:30.674: INFO: Created: latency-svc-klpp5 May 19 01:03:30.687: INFO: Got endpoints: latency-svc-klpp5 [1.120749172s] May 19 01:03:30.745: INFO: Created: latency-svc-5b96w May 19 01:03:30.759: INFO: Got endpoints: latency-svc-5b96w [1.108599217s] May 19 01:03:30.869: INFO: Created: latency-svc-ss9qj May 19 01:03:30.880: INFO: Got endpoints: latency-svc-ss9qj [1.199005397s] May 19 01:03:30.943: INFO: Created: latency-svc-kmcwb May 19 01:03:31.022: INFO: Got endpoints: latency-svc-kmcwb [1.292457696s] May 19 01:03:31.047: INFO: Created: 
latency-svc-dfq9j May 19 01:03:31.067: INFO: Got endpoints: latency-svc-dfq9j [1.203713638s] May 19 01:03:31.086: INFO: Created: latency-svc-w8vmz May 19 01:03:31.100: INFO: Got endpoints: latency-svc-w8vmz [1.119662951s] May 19 01:03:31.120: INFO: Created: latency-svc-9g2qz May 19 01:03:31.178: INFO: Got endpoints: latency-svc-9g2qz [1.159610781s] May 19 01:03:31.199: INFO: Created: latency-svc-dbbf5 May 19 01:03:31.211: INFO: Got endpoints: latency-svc-dbbf5 [1.16177934s] May 19 01:03:31.262: INFO: Created: latency-svc-zdbcf May 19 01:03:31.357: INFO: Got endpoints: latency-svc-zdbcf [1.227227545s] May 19 01:03:31.363: INFO: Created: latency-svc-cp48d May 19 01:03:31.383: INFO: Got endpoints: latency-svc-cp48d [1.219612583s] May 19 01:03:31.383: INFO: Latencies: [40.077191ms 165.801865ms 177.56476ms 268.608006ms 329.815741ms 405.54585ms 436.328939ms 502.93304ms 569.790418ms 612.371363ms 653.025746ms 704.989212ms 719.51115ms 722.754924ms 727.424012ms 737.453657ms 777.709107ms 783.829281ms 790.305769ms 792.121149ms 794.17139ms 797.117222ms 814.939918ms 818.732119ms 819.713681ms 820.408254ms 821.152195ms 821.34607ms 826.40206ms 827.283317ms 828.099385ms 830.437489ms 831.948417ms 832.181946ms 832.400122ms 843.622258ms 843.939497ms 844.01615ms 844.068097ms 844.13518ms 845.00128ms 847.735552ms 852.742725ms 855.2955ms 858.657896ms 860.259674ms 863.355189ms 869.113976ms 869.15879ms 869.816789ms 874.098784ms 874.606347ms 881.521123ms 885.68398ms 889.491438ms 889.653925ms 889.81278ms 890.247385ms 890.642735ms 891.966253ms 894.379923ms 896.167976ms 898.645595ms 904.152636ms 905.442071ms 907.153454ms 909.016747ms 909.586414ms 915.802321ms 916.263901ms 919.18566ms 920.454642ms 921.58149ms 922.422832ms 923.472048ms 923.641414ms 926.405723ms 927.35848ms 928.190765ms 928.386468ms 928.896778ms 930.529652ms 932.390237ms 933.220321ms 934.521934ms 938.136736ms 940.266732ms 941.196932ms 943.292487ms 945.655186ms 947.438708ms 948.987277ms 951.088808ms 951.551913ms 956.020729ms 956.471747ms 956.875901ms 957.404498ms 958.579939ms 959.274841ms 962.037011ms 962.480373ms 963.744859ms 964.376705ms 969.379715ms 973.316723ms 973.631558ms 976.011873ms 976.019846ms 976.711691ms 982.510438ms 983.038821ms 983.883893ms 984.067613ms 984.266914ms 986.326952ms 992.893274ms 994.530864ms 994.764576ms 994.840891ms 1.003072426s 1.004736718s 1.006141313s 1.010114744s 1.012267258s 1.014627054s 1.014639631s 1.014667164s 1.015555825s 1.020796862s 1.022217808s 1.022537214s 1.024057253s 1.02639155s 1.027630338s 1.029233185s 1.041905696s 1.048043548s 1.05219651s 1.055552588s 1.066413127s 1.067431006s 1.07214695s 1.077229442s 1.078593961s 1.081288793s 1.091306262s 1.092787132s 1.092860836s 1.095232671s 1.098820262s 1.102019699s 1.104290705s 1.107235471s 1.107530393s 1.108599217s 1.10992876s 1.112047479s 1.115878336s 1.119662951s 1.120749172s 1.121721399s 1.123119083s 1.124184724s 1.130050037s 1.130206278s 1.143327657s 1.151496871s 1.152998224s 1.159610781s 1.16177934s 1.163263479s 1.168840478s 1.170276191s 1.17626841s 1.183614236s 1.185316478s 1.19274109s 1.196638841s 1.199005397s 1.203713638s 1.212077155s 1.219612583s 1.21967361s 1.220765395s 1.227227545s 1.240245216s 1.240981344s 1.24282358s 1.257631365s 1.28158538s 1.292457696s 1.293302048s 1.299964071s 1.304225186s 1.33197748s 1.332554716s 1.347768499s 1.350979483s 1.365204262s] May 19 01:03:31.383: INFO: 50 %ile: 962.037011ms May 19 01:03:31.383: INFO: 90 %ile: 1.203713638s May 19 01:03:31.383: INFO: 99 %ile: 1.350979483s May 19 01:03:31.383: INFO: Total sample count: 200 
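
(For reference: the three percentile lines above are derived from the sorted 200-sample latency list. Below is a minimal Go sketch of that reduction, assuming a simple nearest-rank rule; percentile is an illustrative helper, not the e2e framework's own code.)

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentile picks the nearest-rank p-th percentile from an ascending slice.
    // Illustrative only; the e2e framework has its own helper for this.
    func percentile(sorted []time.Duration, p int) time.Duration {
        if len(sorted) == 0 {
            return 0
        }
        idx := len(sorted) * p / 100
        if idx >= len(sorted) {
            idx = len(sorted) - 1
        }
        return sorted[idx]
    }

    func main() {
        latencies := []time.Duration{ /* the 200 "Got endpoints" samples go here */ }
        sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
        for _, p := range []int{50, 90, 99} {
            fmt.Printf("%d %%ile: %v\n", p, percentile(latencies, p))
        }
    }
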
[AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:03:31.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9884" for this suite. • [SLOW TEST:18.792 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":255,"skipped":4254,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:03:31.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-cf014acb-111e-4996-9842-a7c61c10099b STEP: Creating a pod to test consume secrets May 19 01:03:31.539: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d2cec044-38f6-44bc-96c9-48eea7ed771b" in namespace "projected-9455" to be "Succeeded or Failed" May 19 01:03:31.554: INFO: Pod "pod-projected-secrets-d2cec044-38f6-44bc-96c9-48eea7ed771b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.898987ms May 19 01:03:33.559: INFO: Pod "pod-projected-secrets-d2cec044-38f6-44bc-96c9-48eea7ed771b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020124236s May 19 01:03:35.563: INFO: Pod "pod-projected-secrets-d2cec044-38f6-44bc-96c9-48eea7ed771b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024255189s STEP: Saw pod success May 19 01:03:35.563: INFO: Pod "pod-projected-secrets-d2cec044-38f6-44bc-96c9-48eea7ed771b" satisfied condition "Succeeded or Failed" May 19 01:03:35.566: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d2cec044-38f6-44bc-96c9-48eea7ed771b container projected-secret-volume-test: STEP: delete the pod May 19 01:03:35.694: INFO: Waiting for pod pod-projected-secrets-d2cec044-38f6-44bc-96c9-48eea7ed771b to disappear May 19 01:03:35.698: INFO: Pod pod-projected-secrets-d2cec044-38f6-44bc-96c9-48eea7ed771b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:03:35.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9455" for this suite. 
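
(The pod above consumed the secret through a projected volume. A minimal sketch of that volume shape using the core/v1 types; the names and mount layout are illustrative, not the test's literal spec.)

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // One projected volume sourcing a single secret; the test pod mounts it
    // read-only and cats the file back out. Names are illustrative.
    func projectedSecretVolume(secretName string) corev1.Volume {
        return corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        },
                    }},
                },
            },
        }
    }
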
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4262,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:03:35.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 19 01:03:35.779: INFO: Waiting up to 5m0s for pod "pod-7fb7ba88-dfc7-4a50-9460-dd8d5888592b" in namespace "emptydir-509" to be "Succeeded or Failed" May 19 01:03:35.848: INFO: Pod "pod-7fb7ba88-dfc7-4a50-9460-dd8d5888592b": Phase="Pending", Reason="", readiness=false. Elapsed: 69.461709ms May 19 01:03:38.339: INFO: Pod "pod-7fb7ba88-dfc7-4a50-9460-dd8d5888592b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.560478166s May 19 01:03:40.370: INFO: Pod "pod-7fb7ba88-dfc7-4a50-9460-dd8d5888592b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591529428s May 19 01:03:42.392: INFO: Pod "pod-7fb7ba88-dfc7-4a50-9460-dd8d5888592b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.613315728s STEP: Saw pod success May 19 01:03:42.392: INFO: Pod "pod-7fb7ba88-dfc7-4a50-9460-dd8d5888592b" satisfied condition "Succeeded or Failed" May 19 01:03:42.397: INFO: Trying to get logs from node latest-worker pod pod-7fb7ba88-dfc7-4a50-9460-dd8d5888592b container test-container: STEP: delete the pod May 19 01:03:42.589: INFO: Waiting for pod pod-7fb7ba88-dfc7-4a50-9460-dd8d5888592b to disappear May 19 01:03:42.594: INFO: Pod pod-7fb7ba88-dfc7-4a50-9460-dd8d5888592b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:03:42.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-509" for this suite. 
• [SLOW TEST:6.986 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4273,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:03:42.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 01:03:42.868: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:03:43.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2247" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":258,"skipped":4319,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:03:43.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-d1a6f97c-82a8-4cc3-8824-6daa4c7d83f7 STEP: Creating a pod to test consume configMaps May 19 01:03:44.047: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-536bd9f6-7848-44da-b9c3-c808e56360b2" in namespace "projected-3918" to be "Succeeded or Failed" May 19 01:03:44.054: INFO: Pod "pod-projected-configmaps-536bd9f6-7848-44da-b9c3-c808e56360b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.480527ms May 19 01:03:46.221: INFO: Pod "pod-projected-configmaps-536bd9f6-7848-44da-b9c3-c808e56360b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173975081s May 19 01:03:48.237: INFO: Pod "pod-projected-configmaps-536bd9f6-7848-44da-b9c3-c808e56360b2": Phase="Running", Reason="", readiness=true. Elapsed: 4.189839574s May 19 01:03:50.260: INFO: Pod "pod-projected-configmaps-536bd9f6-7848-44da-b9c3-c808e56360b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212770353s STEP: Saw pod success May 19 01:03:50.260: INFO: Pod "pod-projected-configmaps-536bd9f6-7848-44da-b9c3-c808e56360b2" satisfied condition "Succeeded or Failed" May 19 01:03:50.271: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-536bd9f6-7848-44da-b9c3-c808e56360b2 container projected-configmap-volume-test: STEP: delete the pod May 19 01:03:50.385: INFO: Waiting for pod pod-projected-configmaps-536bd9f6-7848-44da-b9c3-c808e56360b2 to disappear May 19 01:03:50.390: INFO: Pod pod-projected-configmaps-536bd9f6-7848-44da-b9c3-c808e56360b2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:03:50.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3918" for this suite. • [SLOW TEST:6.497 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":259,"skipped":4323,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:03:50.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0519 01:03:51.361313 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 19 01:03:51.361: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:03:51.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9457" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":260,"skipped":4340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:03:51.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-1dd1f30d-cc1d-4f00-b926-6322bd3e575b STEP: Creating a pod to test consume secrets May 19 01:03:51.818: INFO: Waiting up to 5m0s for pod "pod-secrets-89f7cf6e-21bf-427d-9109-76705253d036" in namespace "secrets-9204" to be "Succeeded or Failed" May 19 01:03:51.825: INFO: Pod "pod-secrets-89f7cf6e-21bf-427d-9109-76705253d036": Phase="Pending", Reason="", readiness=false. Elapsed: 7.15994ms May 19 01:03:53.993: INFO: Pod "pod-secrets-89f7cf6e-21bf-427d-9109-76705253d036": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174911511s May 19 01:03:56.149: INFO: Pod "pod-secrets-89f7cf6e-21bf-427d-9109-76705253d036": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330450556s May 19 01:03:58.163: INFO: Pod "pod-secrets-89f7cf6e-21bf-427d-9109-76705253d036": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.344618869s STEP: Saw pod success May 19 01:03:58.163: INFO: Pod "pod-secrets-89f7cf6e-21bf-427d-9109-76705253d036" satisfied condition "Succeeded or Failed" May 19 01:03:58.269: INFO: Trying to get logs from node latest-worker pod pod-secrets-89f7cf6e-21bf-427d-9109-76705253d036 container secret-volume-test: STEP: delete the pod May 19 01:03:58.423: INFO: Waiting for pod pod-secrets-89f7cf6e-21bf-427d-9109-76705253d036 to disappear May 19 01:03:58.438: INFO: Pod pod-secrets-89f7cf6e-21bf-427d-9109-76705253d036 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:03:58.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9204" for this suite. • [SLOW TEST:7.030 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":261,"skipped":4374,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:03:58.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 01:03:59.594: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 01:04:01.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447039, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447039, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447039, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447039, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service 
STEP: Verifying the service has paired with the endpoint May 19 01:04:05.100: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:04:05.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3216" for this suite. STEP: Destroying namespace "webhook-3216-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.889 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":262,"skipped":4385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:04:06.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 19 01:04:06.477: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 19 01:04:06.580: INFO: Waiting for terminating namespaces to be deleted...
May 19 01:04:06.585: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 19 01:04:06.594: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container status recorded) May 19 01:04:06.594: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 19 01:04:06.594: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container status recorded) May 19 01:04:06.594: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 19 01:04:06.594: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 19 01:04:06.594: INFO: Container kindnet-cni ready: true, restart count 0 May 19 01:04:06.594: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container status recorded) May 19 01:04:06.594: INFO: Container kube-proxy ready: true, restart count 0 May 19 01:04:06.594: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 19 01:04:06.604: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container status recorded) May 19 01:04:06.604: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 19 01:04:06.604: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container status recorded) May 19 01:04:06.604: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 19 01:04:06.604: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 19 01:04:06.604: INFO: Container kindnet-cni ready: true, restart count 0 May 19 01:04:06.604: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container status recorded) May 19 01:04:06.604: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2b18d542-b765-47ff-8d0c-d43fb65aee94 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostport 54321 and hostIP 127.0.0.2 but using UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-2b18d542-b765-47ff-8d0c-d43fb65aee94 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-2b18d542-b765-47ff-8d0c-d43fb65aee94 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:04:24.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-378" for this suite.
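
(The three pods above can coexist because a hostPort binding is keyed by the (hostIP, hostPort, protocol) tuple, not by the port alone. A minimal sketch with values mirroring the test; the helper name and container-side port are illustrative.)

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // Same hostPort 54321 three times over; only the hostIP or protocol differs,
    // so the scheduler sees no conflict between the pods.
    func hostPort(hostIP string, proto corev1.Protocol) corev1.ContainerPort {
        return corev1.ContainerPort{
            ContainerPort: 8080, // illustrative container-side port
            HostPort:      54321,
            HostIP:        hostIP,
            Protocol:      proto,
        }
    }

    // pod1: hostPort("127.0.0.1", corev1.ProtocolTCP)
    // pod2: hostPort("127.0.0.2", corev1.ProtocolTCP)
    // pod3: hostPort("127.0.0.2", corev1.ProtocolUDP)
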
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:18.534 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":263,"skipped":4412,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:04:24.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 01:04:24.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6190b91-195e-4047-a017-cf2800aeee34" in namespace "downward-api-7088" to be "Succeeded or Failed" May 19 01:04:24.987: INFO: Pod "downwardapi-volume-a6190b91-195e-4047-a017-cf2800aeee34": Phase="Pending", Reason="", readiness=false. Elapsed: 11.116963ms May 19 01:04:26.991: INFO: Pod "downwardapi-volume-a6190b91-195e-4047-a017-cf2800aeee34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015014027s May 19 01:04:28.996: INFO: Pod "downwardapi-volume-a6190b91-195e-4047-a017-cf2800aeee34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020324703s STEP: Saw pod success May 19 01:04:28.996: INFO: Pod "downwardapi-volume-a6190b91-195e-4047-a017-cf2800aeee34" satisfied condition "Succeeded or Failed" May 19 01:04:29.000: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a6190b91-195e-4047-a017-cf2800aeee34 container client-container: STEP: delete the pod May 19 01:04:29.091: INFO: Waiting for pod downwardapi-volume-a6190b91-195e-4047-a017-cf2800aeee34 to disappear May 19 01:04:29.094: INFO: Pod downwardapi-volume-a6190b91-195e-4047-a017-cf2800aeee34 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:04:29.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7088" for this suite. 
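
(What the test above asserts: when the container sets no memory limit, the downward API resolves limits.memory to the node's allocatable memory. A minimal sketch of the volume file involved; names are illustrative.)

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // limits.memory falls back to node allocatable when the container has no limit.
    func memoryLimitFile() corev1.DownwardAPIVolumeFile {
        return corev1.DownwardAPIVolumeFile{
            Path: "memory_limit",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",
                Resource:      "limits.memory",
            },
        }
    }
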
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":264,"skipped":4419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:04:29.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-61072d8a-1eb3-4d40-ac43-a9b3d4150885 STEP: Creating a pod to test consume secrets May 19 01:04:29.168: INFO: Waiting up to 5m0s for pod "pod-secrets-71748742-1a6b-44da-8525-abc3f39b5fca" in namespace "secrets-1058" to be "Succeeded or Failed" May 19 01:04:29.185: INFO: Pod "pod-secrets-71748742-1a6b-44da-8525-abc3f39b5fca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.872936ms May 19 01:04:31.188: INFO: Pod "pod-secrets-71748742-1a6b-44da-8525-abc3f39b5fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020030383s May 19 01:04:33.208: INFO: Pod "pod-secrets-71748742-1a6b-44da-8525-abc3f39b5fca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040138201s May 19 01:04:35.220: INFO: Pod "pod-secrets-71748742-1a6b-44da-8525-abc3f39b5fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052015508s STEP: Saw pod success May 19 01:04:35.220: INFO: Pod "pod-secrets-71748742-1a6b-44da-8525-abc3f39b5fca" satisfied condition "Succeeded or Failed" May 19 01:04:35.223: INFO: Trying to get logs from node latest-worker pod pod-secrets-71748742-1a6b-44da-8525-abc3f39b5fca container secret-volume-test: STEP: delete the pod May 19 01:04:35.257: INFO: Waiting for pod pod-secrets-71748742-1a6b-44da-8525-abc3f39b5fca to disappear May 19 01:04:35.268: INFO: Pod pod-secrets-71748742-1a6b-44da-8525-abc3f39b5fca no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:04:35.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1058" for this suite. 
• [SLOW TEST:6.174 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":265,"skipped":4455,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:04:35.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:04:41.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7047" for this suite. STEP: Destroying namespace "nsdeletetest-9228" for this suite. May 19 01:04:41.598: INFO: Namespace nsdeletetest-9228 was already deleted STEP: Destroying namespace "nsdeletetest-1655" for this suite. 
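
(The final verification above amounts to listing services in the recreated namespace and expecting the list to be empty. A minimal client-go sketch, assuming an already-wired clientset; the helper name is illustrative.)

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // After delete-and-recreate, no service from the old namespace should survive.
    func noServicesLeft(ctx context.Context, cs kubernetes.Interface, ns string) (bool, error) {
        svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        return len(svcs.Items) == 0, nil
    }
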
• [SLOW TEST:6.326 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":266,"skipped":4460,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:04:41.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:04:41.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8217" for this suite. 
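
(The discovery walk above can be reproduced with the discovery client: fetch the group list and look for apiextensions.k8s.io and its v1 version. A minimal sketch, again assuming a wired clientset; the helper name is illustrative.)

    package sketch

    import "k8s.io/client-go/kubernetes"

    // CustomResourceDefinitions surface in discovery via the apiextensions.k8s.io group.
    func crdGroupPublished(cs kubernetes.Interface) (bool, error) {
        groups, err := cs.Discovery().ServerGroups()
        if err != nil {
            return false, err
        }
        for _, g := range groups.Groups {
            if g.Name != "apiextensions.k8s.io" {
                continue
            }
            for _, v := range g.Versions {
                if v.Version == "v1" {
                    return true, nil
                }
            }
        }
        return false, nil
    }
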
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":267,"skipped":4476,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:04:41.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 19 01:04:46.338: INFO: Successfully updated pod "annotationupdateb4b992ee-3dbd-4299-b5c4-594dd9ad0586" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:04:48.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5912" for this suite. • [SLOW TEST:6.743 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4490,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:04:48.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 19 01:04:49.030: INFO: created pod pod-service-account-defaultsa May 19 01:04:49.030: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 19 01:04:49.042: INFO: created pod pod-service-account-mountsa May 19 01:04:49.042: INFO: pod pod-service-account-mountsa service account token volume mount: true May 19 01:04:49.089: INFO: created pod pod-service-account-nomountsa May 19 01:04:49.089: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 19 01:04:49.102: INFO: created pod 
pod-service-account-defaultsa-mountspec May 19 01:04:49.102: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 19 01:04:49.127: INFO: created pod pod-service-account-mountsa-mountspec May 19 01:04:49.127: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 19 01:04:49.165: INFO: created pod pod-service-account-nomountsa-mountspec May 19 01:04:49.165: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 19 01:04:49.251: INFO: created pod pod-service-account-defaultsa-nomountspec May 19 01:04:49.251: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 19 01:04:49.266: INFO: created pod pod-service-account-mountsa-nomountspec May 19 01:04:49.266: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 19 01:04:49.302: INFO: created pod pod-service-account-nomountsa-nomountspec May 19 01:04:49.302: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:04:49.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3222" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":269,"skipped":4492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:04:49.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-836ee46b-9eaf-4104-be19-4a6b3dff0576 STEP: Creating a pod to test consume secrets May 19 01:04:49.529: INFO: Waiting up to 5m0s for pod "pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b" in namespace "secrets-7151" to be "Succeeded or Failed" May 19 01:04:49.533: INFO: Pod "pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608424ms May 19 01:04:51.873: INFO: Pod "pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344584948s May 19 01:04:53.958: INFO: Pod "pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429522574s May 19 01:04:56.107: INFO: Pod "pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.578215348s May 19 01:04:58.324: INFO: Pod "pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.795656873s May 19 01:05:00.370: INFO: Pod "pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.841695307s May 19 01:05:02.532: INFO: Pod "pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.003293611s STEP: Saw pod success May 19 01:05:02.532: INFO: Pod "pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b" satisfied condition "Succeeded or Failed" May 19 01:05:02.535: INFO: Trying to get logs from node latest-worker pod pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b container secret-env-test: STEP: delete the pod May 19 01:05:02.999: INFO: Waiting for pod pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b to disappear May 19 01:05:03.002: INFO: Pod pod-secrets-94e21bb6-c401-4cd7-9b9e-5f8ebee30a6b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:05:03.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7151" for this suite. • [SLOW TEST:13.657 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":270,"skipped":4533,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:05:03.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 01:05:03.962: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 19 01:05:06.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6311 create -f -' May 19 01:05:10.480: INFO: stderr: "" May 19 01:05:10.480: INFO: stdout: "e2e-test-crd-publish-openapi-2665-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 19 01:05:10.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6311 delete e2e-test-crd-publish-openapi-2665-crds test-cr' May 19 01:05:10.598: INFO: stderr: "" May 19 01:05:10.598: INFO: stdout: "e2e-test-crd-publish-openapi-2665-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 19 01:05:10.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6311 apply -f -' May 19 01:05:10.882: INFO: stderr: "" May 
19 01:05:10.882: INFO: stdout: "e2e-test-crd-publish-openapi-2665-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 19 01:05:10.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6311 delete e2e-test-crd-publish-openapi-2665-crds test-cr' May 19 01:05:10.992: INFO: stderr: "" May 19 01:05:10.992: INFO: stdout: "e2e-test-crd-publish-openapi-2665-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 19 01:05:10.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2665-crds' May 19 01:05:11.292: INFO: stderr: "" May 19 01:05:11.292: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2665-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:05:14.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6311" for this suite. 
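The CRD fixture exercised above publishes a schema whose spec is an embedded object marked x-kubernetes-preserve-unknown-fields, which is why kubectl's client-side validation accepts requests with arbitrary properties and why kubectl explain prints only "Specification of Waldo" for spec. A minimal sketch of such a CRD, using a hypothetical waldos.example.com group in place of the generated e2e-test-crd-publish-openapi-2665 fixture names:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com        # hypothetical; the suite generates its fixture names
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            # Embedded object that keeps whatever fields clients send,
            # so create and apply succeed with unknown properties.
            type: object
            x-kubernetes-embedded-resource: true
            x-kubernetes-preserve-unknown-fields: true
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true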
• [SLOW TEST:11.184 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":271,"skipped":4551,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:05:14.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-980 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 19 01:05:14.358: INFO: Found 0 stateful pods, waiting for 3 May 19 01:05:24.364: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 01:05:24.364: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 01:05:24.364: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 19 01:05:34.363: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 01:05:34.363: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 01:05:34.363: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 19 01:05:34.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-980 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 01:05:34.666: INFO: stderr: "I0519 01:05:34.527920 3972 log.go:172] (0xc0009ed8c0) (0xc000765040) Create stream\nI0519 01:05:34.527995 3972 log.go:172] (0xc0009ed8c0) (0xc000765040) Stream added, broadcasting: 1\nI0519 01:05:34.532893 3972 log.go:172] (0xc0009ed8c0) Reply frame received for 1\nI0519 01:05:34.532947 3972 log.go:172] (0xc0009ed8c0) (0xc000759cc0) Create stream\nI0519 01:05:34.532967 3972 log.go:172] (0xc0009ed8c0) (0xc000759cc0) Stream added, broadcasting: 3\nI0519 01:05:34.534168 3972 log.go:172] (0xc0009ed8c0) Reply frame received for 3\nI0519 01:05:34.534209 3972 log.go:172] (0xc0009ed8c0) (0xc000732dc0) Create stream\nI0519 01:05:34.534219 3972 log.go:172] 
(0xc0009ed8c0) (0xc000732dc0) Stream added, broadcasting: 5\nI0519 01:05:34.535314 3972 log.go:172] (0xc0009ed8c0) Reply frame received for 5\nI0519 01:05:34.622199 3972 log.go:172] (0xc0009ed8c0) Data frame received for 5\nI0519 01:05:34.622256 3972 log.go:172] (0xc000732dc0) (5) Data frame handling\nI0519 01:05:34.622282 3972 log.go:172] (0xc000732dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 01:05:34.658660 3972 log.go:172] (0xc0009ed8c0) Data frame received for 3\nI0519 01:05:34.658707 3972 log.go:172] (0xc000759cc0) (3) Data frame handling\nI0519 01:05:34.658722 3972 log.go:172] (0xc000759cc0) (3) Data frame sent\nI0519 01:05:34.658746 3972 log.go:172] (0xc0009ed8c0) Data frame received for 3\nI0519 01:05:34.658773 3972 log.go:172] (0xc000759cc0) (3) Data frame handling\nI0519 01:05:34.658808 3972 log.go:172] (0xc0009ed8c0) Data frame received for 5\nI0519 01:05:34.658841 3972 log.go:172] (0xc000732dc0) (5) Data frame handling\nI0519 01:05:34.660894 3972 log.go:172] (0xc0009ed8c0) Data frame received for 1\nI0519 01:05:34.660933 3972 log.go:172] (0xc000765040) (1) Data frame handling\nI0519 01:05:34.660964 3972 log.go:172] (0xc000765040) (1) Data frame sent\nI0519 01:05:34.660997 3972 log.go:172] (0xc0009ed8c0) (0xc000765040) Stream removed, broadcasting: 1\nI0519 01:05:34.661038 3972 log.go:172] (0xc0009ed8c0) Go away received\nI0519 01:05:34.661658 3972 log.go:172] (0xc0009ed8c0) (0xc000765040) Stream removed, broadcasting: 1\nI0519 01:05:34.661688 3972 log.go:172] (0xc0009ed8c0) (0xc000759cc0) Stream removed, broadcasting: 3\nI0519 01:05:34.661702 3972 log.go:172] (0xc0009ed8c0) (0xc000732dc0) Stream removed, broadcasting: 5\n" May 19 01:05:34.666: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 01:05:34.666: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 19 01:05:44.696: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 19 01:05:54.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-980 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 01:05:54.979: INFO: stderr: "I0519 01:05:54.883620 3994 log.go:172] (0xc00095c790) (0xc0003ffae0) Create stream\nI0519 01:05:54.883687 3994 log.go:172] (0xc00095c790) (0xc0003ffae0) Stream added, broadcasting: 1\nI0519 01:05:54.888048 3994 log.go:172] (0xc00095c790) Reply frame received for 1\nI0519 01:05:54.888089 3994 log.go:172] (0xc00095c790) (0xc000621d60) Create stream\nI0519 01:05:54.888109 3994 log.go:172] (0xc00095c790) (0xc000621d60) Stream added, broadcasting: 3\nI0519 01:05:54.888844 3994 log.go:172] (0xc00095c790) Reply frame received for 3\nI0519 01:05:54.888878 3994 log.go:172] (0xc00095c790) (0xc00051ab40) Create stream\nI0519 01:05:54.888890 3994 log.go:172] (0xc00095c790) (0xc00051ab40) Stream added, broadcasting: 5\nI0519 01:05:54.889689 3994 log.go:172] (0xc00095c790) Reply frame received for 5\nI0519 01:05:54.973822 3994 log.go:172] (0xc00095c790) Data frame received for 5\nI0519 01:05:54.973850 3994 log.go:172] (0xc00051ab40) (5) Data frame handling\nI0519 01:05:54.973868 3994 log.go:172] (0xc00051ab40) (5) Data frame sent\nI0519 
01:05:54.973879 3994 log.go:172] (0xc00095c790) Data frame received for 5\nI0519 01:05:54.973893 3994 log.go:172] (0xc00051ab40) (5) Data frame handling\nI0519 01:05:54.973905 3994 log.go:172] (0xc00095c790) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 01:05:54.973913 3994 log.go:172] (0xc000621d60) (3) Data frame handling\nI0519 01:05:54.973957 3994 log.go:172] (0xc000621d60) (3) Data frame sent\nI0519 01:05:54.973971 3994 log.go:172] (0xc00095c790) Data frame received for 3\nI0519 01:05:54.973983 3994 log.go:172] (0xc000621d60) (3) Data frame handling\nI0519 01:05:54.975402 3994 log.go:172] (0xc00095c790) Data frame received for 1\nI0519 01:05:54.975437 3994 log.go:172] (0xc0003ffae0) (1) Data frame handling\nI0519 01:05:54.975448 3994 log.go:172] (0xc0003ffae0) (1) Data frame sent\nI0519 01:05:54.975461 3994 log.go:172] (0xc00095c790) (0xc0003ffae0) Stream removed, broadcasting: 1\nI0519 01:05:54.975495 3994 log.go:172] (0xc00095c790) Go away received\nI0519 01:05:54.975749 3994 log.go:172] (0xc00095c790) (0xc0003ffae0) Stream removed, broadcasting: 1\nI0519 01:05:54.975760 3994 log.go:172] (0xc00095c790) (0xc000621d60) Stream removed, broadcasting: 3\nI0519 01:05:54.975766 3994 log.go:172] (0xc00095c790) (0xc00051ab40) Stream removed, broadcasting: 5\n" May 19 01:05:54.979: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 01:05:54.979: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 01:06:14.998: INFO: Waiting for StatefulSet statefulset-980/ss2 to complete update STEP: Rolling back to a previous revision May 19 01:06:25.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-980 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 19 01:06:25.277: INFO: stderr: "I0519 01:06:25.150222 4014 log.go:172] (0xc0009c6000) (0xc0006528c0) Create stream\nI0519 01:06:25.150292 4014 log.go:172] (0xc0009c6000) (0xc0006528c0) Stream added, broadcasting: 1\nI0519 01:06:25.154429 4014 log.go:172] (0xc0009c6000) Reply frame received for 1\nI0519 01:06:25.154469 4014 log.go:172] (0xc0009c6000) (0xc0005fe5a0) Create stream\nI0519 01:06:25.154477 4014 log.go:172] (0xc0009c6000) (0xc0005fe5a0) Stream added, broadcasting: 3\nI0519 01:06:25.155358 4014 log.go:172] (0xc0009c6000) Reply frame received for 3\nI0519 01:06:25.155407 4014 log.go:172] (0xc0009c6000) (0xc000689a40) Create stream\nI0519 01:06:25.155420 4014 log.go:172] (0xc0009c6000) (0xc000689a40) Stream added, broadcasting: 5\nI0519 01:06:25.156187 4014 log.go:172] (0xc0009c6000) Reply frame received for 5\nI0519 01:06:25.225079 4014 log.go:172] (0xc0009c6000) Data frame received for 5\nI0519 01:06:25.225101 4014 log.go:172] (0xc000689a40) (5) Data frame handling\nI0519 01:06:25.225239 4014 log.go:172] (0xc000689a40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0519 01:06:25.268020 4014 log.go:172] (0xc0009c6000) Data frame received for 3\nI0519 01:06:25.268047 4014 log.go:172] (0xc0005fe5a0) (3) Data frame handling\nI0519 01:06:25.268064 4014 log.go:172] (0xc0005fe5a0) (3) Data frame sent\nI0519 01:06:25.268073 4014 log.go:172] (0xc0009c6000) Data frame received for 3\nI0519 01:06:25.268080 4014 log.go:172] (0xc0005fe5a0) (3) Data frame handling\nI0519 01:06:25.268227 4014 log.go:172] (0xc0009c6000) Data frame received for 
5\nI0519 01:06:25.268269 4014 log.go:172] (0xc000689a40) (5) Data frame handling\nI0519 01:06:25.270797 4014 log.go:172] (0xc0009c6000) Data frame received for 1\nI0519 01:06:25.270826 4014 log.go:172] (0xc0006528c0) (1) Data frame handling\nI0519 01:06:25.270852 4014 log.go:172] (0xc0006528c0) (1) Data frame sent\nI0519 01:06:25.270874 4014 log.go:172] (0xc0009c6000) (0xc0006528c0) Stream removed, broadcasting: 1\nI0519 01:06:25.270901 4014 log.go:172] (0xc0009c6000) Go away received\nI0519 01:06:25.271338 4014 log.go:172] (0xc0009c6000) (0xc0006528c0) Stream removed, broadcasting: 1\nI0519 01:06:25.271363 4014 log.go:172] (0xc0009c6000) (0xc0005fe5a0) Stream removed, broadcasting: 3\nI0519 01:06:25.271377 4014 log.go:172] (0xc0009c6000) (0xc000689a40) Stream removed, broadcasting: 5\n" May 19 01:06:25.277: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 19 01:06:25.278: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 19 01:06:35.322: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 19 01:06:45.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-980 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 19 01:06:45.563: INFO: stderr: "I0519 01:06:45.491400 4033 log.go:172] (0xc000a3c000) (0xc000378280) Create stream\nI0519 01:06:45.491461 4033 log.go:172] (0xc000a3c000) (0xc000378280) Stream added, broadcasting: 1\nI0519 01:06:45.494545 4033 log.go:172] (0xc000a3c000) Reply frame received for 1\nI0519 01:06:45.494596 4033 log.go:172] (0xc000a3c000) (0xc00024e0a0) Create stream\nI0519 01:06:45.494612 4033 log.go:172] (0xc000a3c000) (0xc00024e0a0) Stream added, broadcasting: 3\nI0519 01:06:45.495570 4033 log.go:172] (0xc000a3c000) Reply frame received for 3\nI0519 01:06:45.495599 4033 log.go:172] (0xc000a3c000) (0xc000378e60) Create stream\nI0519 01:06:45.495611 4033 log.go:172] (0xc000a3c000) (0xc000378e60) Stream added, broadcasting: 5\nI0519 01:06:45.496364 4033 log.go:172] (0xc000a3c000) Reply frame received for 5\nI0519 01:06:45.555661 4033 log.go:172] (0xc000a3c000) Data frame received for 3\nI0519 01:06:45.555720 4033 log.go:172] (0xc000a3c000) Data frame received for 5\nI0519 01:06:45.555764 4033 log.go:172] (0xc000378e60) (5) Data frame handling\nI0519 01:06:45.555792 4033 log.go:172] (0xc000378e60) (5) Data frame sent\nI0519 01:06:45.555812 4033 log.go:172] (0xc000a3c000) Data frame received for 5\nI0519 01:06:45.555831 4033 log.go:172] (0xc000378e60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0519 01:06:45.555866 4033 log.go:172] (0xc00024e0a0) (3) Data frame handling\nI0519 01:06:45.555922 4033 log.go:172] (0xc00024e0a0) (3) Data frame sent\nI0519 01:06:45.555951 4033 log.go:172] (0xc000a3c000) Data frame received for 3\nI0519 01:06:45.555965 4033 log.go:172] (0xc00024e0a0) (3) Data frame handling\nI0519 01:06:45.557442 4033 log.go:172] (0xc000a3c000) Data frame received for 1\nI0519 01:06:45.557466 4033 log.go:172] (0xc000378280) (1) Data frame handling\nI0519 01:06:45.557489 4033 log.go:172] (0xc000378280) (1) Data frame sent\nI0519 01:06:45.557504 4033 log.go:172] (0xc000a3c000) (0xc000378280) Stream removed, broadcasting: 1\nI0519 01:06:45.557671 4033 log.go:172] (0xc000a3c000) Go away received\nI0519 01:06:45.557801 4033 log.go:172] (0xc000a3c000) 
(0xc000378280) Stream removed, broadcasting: 1\nI0519 01:06:45.557828 4033 log.go:172] (0xc000a3c000) (0xc00024e0a0) Stream removed, broadcasting: 3\nI0519 01:06:45.557844 4033 log.go:172] (0xc000a3c000) (0xc000378e60) Stream removed, broadcasting: 5\n" May 19 01:06:45.563: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 19 01:06:45.563: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 19 01:07:15.586: INFO: Waiting for StatefulSet statefulset-980/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 19 01:07:25.595: INFO: Deleting all statefulset in ns statefulset-980 May 19 01:07:25.598: INFO: Scaling statefulset ss2 to 0 May 19 01:07:45.629: INFO: Waiting for statefulset status.replicas updated to 0 May 19 01:07:45.632: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:07:45.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-980" for this suite. • [SLOW TEST:151.424 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":272,"skipped":4565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:07:45.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 19 01:07:49.788: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:07:49.814: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4154" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4589,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:07:49.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 01:07:49.948: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c67eeb3e-35f1-4834-81bb-a66ecd088d7f" in namespace "projected-3391" to be "Succeeded or Failed" May 19 01:07:49.966: INFO: Pod "downwardapi-volume-c67eeb3e-35f1-4834-81bb-a66ecd088d7f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.687699ms May 19 01:07:52.097: INFO: Pod "downwardapi-volume-c67eeb3e-35f1-4834-81bb-a66ecd088d7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148718726s May 19 01:07:54.102: INFO: Pod "downwardapi-volume-c67eeb3e-35f1-4834-81bb-a66ecd088d7f": Phase="Running", Reason="", readiness=true. Elapsed: 4.153500167s May 19 01:07:56.107: INFO: Pod "downwardapi-volume-c67eeb3e-35f1-4834-81bb-a66ecd088d7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.158562384s STEP: Saw pod success May 19 01:07:56.107: INFO: Pod "downwardapi-volume-c67eeb3e-35f1-4834-81bb-a66ecd088d7f" satisfied condition "Succeeded or Failed" May 19 01:07:56.110: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c67eeb3e-35f1-4834-81bb-a66ecd088d7f container client-container: STEP: delete the pod May 19 01:07:56.189: INFO: Waiting for pod downwardapi-volume-c67eeb3e-35f1-4834-81bb-a66ecd088d7f to disappear May 19 01:07:56.196: INFO: Pod downwardapi-volume-c67eeb3e-35f1-4834-81bb-a66ecd088d7f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:07:56.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3391" for this suite. 
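The projected downward API test above mounts the container's own CPU request into the pod filesystem and asserts on the file contents. A minimal sketch of the pattern, with the pod name, image, mount path, and the 250m request all as assumptions (the suite generates UUID pod names and uses its own test image):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example  # hypothetical; the test uses a generated UUID name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                   # the value that ends up in the mounted file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m           # expose the request in millicores (file reads 250)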
• [SLOW TEST:6.399 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":274,"skipped":4610,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:07:56.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 19 01:07:56.265: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 19 01:07:59.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8824 create -f -' May 19 01:08:02.503: INFO: stderr: "" May 19 01:08:02.503: INFO: stdout: "e2e-test-crd-publish-openapi-3264-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 19 01:08:02.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8824 delete e2e-test-crd-publish-openapi-3264-crds test-cr' May 19 01:08:02.626: INFO: stderr: "" May 19 01:08:02.626: INFO: stdout: "e2e-test-crd-publish-openapi-3264-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 19 01:08:02.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8824 apply -f -' May 19 01:08:02.878: INFO: stderr: "" May 19 01:08:02.878: INFO: stdout: "e2e-test-crd-publish-openapi-3264-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 19 01:08:02.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8824 delete e2e-test-crd-publish-openapi-3264-crds test-cr' May 19 01:08:02.998: INFO: stderr: "" May 19 01:08:02.998: INFO: stdout: "e2e-test-crd-publish-openapi-3264-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 19 01:08:02.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3264-crds' May 19 01:08:03.273: INFO: stderr: "" May 19 01:08:03.273: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3264-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:08:05.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8824" for this suite. • [SLOW TEST:8.973 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":275,"skipped":4612,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:08:05.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 01:08:05.655: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 01:08:07.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447285, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447285, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447285, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447285, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 01:08:09.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447285, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447285, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447285, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447285, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 01:08:12.712: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:08:12.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5129" for this suite. STEP: Destroying namespace "webhook-5129-markers" for this suite. 
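The discovery checks above confirm that admissionregistration.k8s.io/v1 advertises both mutatingwebhookconfigurations and validatingwebhookconfigurations. For reference, a minimal sketch of the validating variant that this API group serves; the configuration name, webhook identifier, namespace, and path here are hypothetical, though e2e-test-webhook is the service name the suite pairs endpoints with:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validating-config   # hypothetical name
webhooks:
- name: validate-pods.example.com   # hypothetical webhook identifier
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      namespace: default            # assumed; the suite runs its webhook in the test namespace
      name: e2e-test-webhook
      path: /validate               # assumed path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]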
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.645 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":276,"skipped":4617,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:08:12.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:08:44.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8987" for this suite. STEP: Destroying namespace "nsdeletetest-744" for this suite. May 19 01:08:44.114: INFO: Namespace nsdeletetest-744 was already deleted STEP: Destroying namespace "nsdeletetest-3404" for this suite. 
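The namespace test above relies on cascading deletion: removing a namespace drives it through Terminating, evicts every pod inside, and only then drops the namespace object, which is what the "Waiting for the namespace to be removed" step polls for. A minimal sketch of the setup, with the namespace and image names as assumptions (the suite generates nsdeletetest-* names):

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example        # hypothetical; the suite generates this
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest-example
spec:
  containers:
  - name: nginx
    image: nginx                    # assumed image

Deleting the namespace (for example with kubectl delete namespace nsdeletetest-example) removes test-pod as part of the same teardown; recreating an empty namespace of the same name then shows no pods, which is the final verification step above.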
• [SLOW TEST:31.280 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":277,"skipped":4620,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:08:44.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2c669233-0ca8-4230-856d-64721add38d4 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2c669233-0ca8-4230-856d-64721add38d4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:08:50.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5055" for this suite. 
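The projected ConfigMap test above creates a pod that mounts a ConfigMap through a projected volume, updates the ConfigMap, and waits for the kubelet's periodic volume sync to rewrite the mounted file. A minimal sketch of the pattern, with the image, key names, and mount path as assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-example # hypothetical; the suite appends a UUID
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-example

Editing data-1 in the ConfigMap changes the file's contents in the running pod without a restart, typically within one kubelet sync period, which is the update the test waits to observe.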
• [SLOW TEST:6.136 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:08:50.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 19 01:08:54.936: INFO: Successfully updated pod "labelsupdate90b6f693-dabf-429d-82b5-e24752d04c9d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:08:58.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5443" for this suite. 
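The labels test above patches the pod's own metadata.labels ("Successfully updated pod labelsupdate...") and waits for a projected downward API volume to reflect the change. A minimal sketch, with the pod name, image, and mount path as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example        # hypothetical; the suite uses a generated name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

Unlike label values injected as environment variables, the mounted labels file is live: changing key from value1 to another value is eventually rewritten in /etc/podinfo/labels.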
• [SLOW TEST:8.730 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4651,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:08:58.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:09:03.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6783" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":280,"skipped":4654,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:09:03.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 19 01:09:03.262: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 19 01:09:03.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9893' May 19 01:09:03.750: INFO: stderr: "" May 19 01:09:03.750: INFO: stdout: "service/agnhost-slave created\n" May 19 01:09:03.750: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 19 01:09:03.750: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9893' May 19 01:09:04.449: INFO: stderr: "" May 19 01:09:04.449: INFO: stdout: "service/agnhost-master created\n" May 19 01:09:04.449: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 19 01:09:04.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9893' May 19 01:09:04.806: INFO: stderr: "" May 19 01:09:04.806: INFO: stdout: "service/frontend created\n" May 19 01:09:04.806: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 19 01:09:04.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9893' May 19 01:09:05.340: INFO: stderr: "" May 19 01:09:05.340: INFO: stdout: "deployment.apps/frontend created\n" May 19 01:09:05.340: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 19 01:09:05.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9893' May 19 01:09:05.640: INFO: stderr: "" May 19 01:09:05.640: INFO: stdout: "deployment.apps/agnhost-master created\n" May 19 01:09:05.640: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 19 01:09:05.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9893' May 19 01:09:05.935: INFO: stderr: "" May 19 01:09:05.935: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 19 01:09:05.935: INFO: Waiting for all frontend pods to be Running. May 19 01:09:15.985: INFO: Waiting for frontend to serve content. May 19 01:09:15.998: INFO: Trying to add a new entry to the guestbook. May 19 01:09:16.009: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources May 19 01:09:16.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9893' May 19 01:09:16.159: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 01:09:16.159: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 19 01:09:16.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9893' May 19 01:09:16.417: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 01:09:16.417: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 19 01:09:16.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9893' May 19 01:09:16.578: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 01:09:16.578: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 01:09:16.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9893' May 19 01:09:16.679: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 01:09:16.679: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 19 01:09:16.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9893' May 19 01:09:16.813: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 01:09:16.813: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 19 01:09:16.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9893' May 19 01:09:17.548: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 19 01:09:17.548: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:09:17.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9893" for this suite. 
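The guestbook manifests echoed in the log above lose their line breaks when printed inline; for readability, the frontend Deployment reflows to the manifest below. The content is taken verbatim from the logged YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: ["guestbook", "--backend-port", "6379"]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80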
• [SLOW TEST:14.396 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":281,"skipped":4676,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:09:17.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-3079 STEP: creating replication controller nodeport-test in namespace services-3079 I0519 01:09:18.892769 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3079, replica count: 2 I0519 01:09:21.943153 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0519 01:09:24.943350 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 19 01:09:24.943: INFO: Creating new exec pod May 19 01:09:29.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3079 execpod56vgs -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 19 01:09:30.203: INFO: stderr: "I0519 01:09:30.118396 4415 log.go:172] (0xc00003a0b0) (0xc000aa4820) Create stream\nI0519 01:09:30.118478 4415 log.go:172] (0xc00003a0b0) (0xc000aa4820) Stream added, broadcasting: 1\nI0519 01:09:30.123778 4415 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0519 01:09:30.123825 4415 log.go:172] (0xc00003a0b0) (0xc0004fe320) Create stream\nI0519 01:09:30.123836 4415 log.go:172] (0xc00003a0b0) (0xc0004fe320) Stream added, broadcasting: 3\nI0519 01:09:30.124690 4415 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0519 01:09:30.124731 4415 log.go:172] (0xc00003a0b0) (0xc0004ff2c0) Create stream\nI0519 01:09:30.124739 4415 log.go:172] (0xc00003a0b0) (0xc0004ff2c0) Stream added, broadcasting: 5\nI0519 01:09:30.125644 4415 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0519 01:09:30.196390 4415 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0519 01:09:30.196422 4415 log.go:172] (0xc0004ff2c0) (5) Data frame handling\nI0519 01:09:30.196440 4415 log.go:172] (0xc0004ff2c0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0519 
01:09:30.197275 4415 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0519 01:09:30.197299 4415 log.go:172] (0xc0004ff2c0) (5) Data frame handling\nI0519 01:09:30.197309 4415 log.go:172] (0xc0004ff2c0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0519 01:09:30.198045 4415 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0519 01:09:30.198067 4415 log.go:172] (0xc0004fe320) (3) Data frame handling\nI0519 01:09:30.198087 4415 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0519 01:09:30.198119 4415 log.go:172] (0xc0004ff2c0) (5) Data frame handling\nI0519 01:09:30.199385 4415 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0519 01:09:30.199406 4415 log.go:172] (0xc000aa4820) (1) Data frame handling\nI0519 01:09:30.199418 4415 log.go:172] (0xc000aa4820) (1) Data frame sent\nI0519 01:09:30.199436 4415 log.go:172] (0xc00003a0b0) (0xc000aa4820) Stream removed, broadcasting: 1\nI0519 01:09:30.199463 4415 log.go:172] (0xc00003a0b0) Go away received\nI0519 01:09:30.199720 4415 log.go:172] (0xc00003a0b0) (0xc000aa4820) Stream removed, broadcasting: 1\nI0519 01:09:30.199733 4415 log.go:172] (0xc00003a0b0) (0xc0004fe320) Stream removed, broadcasting: 3\nI0519 01:09:30.199740 4415 log.go:172] (0xc00003a0b0) (0xc0004ff2c0) Stream removed, broadcasting: 5\n" May 19 01:09:30.204: INFO: stdout: "" May 19 01:09:30.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3079 execpod56vgs -- /bin/sh -x -c nc -zv -t -w 2 10.105.6.221 80' May 19 01:09:30.400: INFO: stderr: "I0519 01:09:30.336813 4436 log.go:172] (0xc000aad4a0) (0xc000ba65a0) Create stream\nI0519 01:09:30.336869 4436 log.go:172] (0xc000aad4a0) (0xc000ba65a0) Stream added, broadcasting: 1\nI0519 01:09:30.341526 4436 log.go:172] (0xc000aad4a0) Reply frame received for 1\nI0519 01:09:30.341567 4436 log.go:172] (0xc000aad4a0) (0xc0000f30e0) Create stream\nI0519 01:09:30.341580 4436 log.go:172] (0xc000aad4a0) (0xc0000f30e0) Stream added, broadcasting: 3\nI0519 01:09:30.342861 4436 log.go:172] (0xc000aad4a0) Reply frame received for 3\nI0519 01:09:30.342931 4436 log.go:172] (0xc000aad4a0) (0xc0006e4000) Create stream\nI0519 01:09:30.342958 4436 log.go:172] (0xc000aad4a0) (0xc0006e4000) Stream added, broadcasting: 5\nI0519 01:09:30.343990 4436 log.go:172] (0xc000aad4a0) Reply frame received for 5\nI0519 01:09:30.393342 4436 log.go:172] (0xc000aad4a0) Data frame received for 3\nI0519 01:09:30.393388 4436 log.go:172] (0xc0000f30e0) (3) Data frame handling\nI0519 01:09:30.393414 4436 log.go:172] (0xc000aad4a0) Data frame received for 5\nI0519 01:09:30.393423 4436 log.go:172] (0xc0006e4000) (5) Data frame handling\nI0519 01:09:30.393430 4436 log.go:172] (0xc0006e4000) (5) Data frame sent\nI0519 01:09:30.393436 4436 log.go:172] (0xc000aad4a0) Data frame received for 5\nI0519 01:09:30.393445 4436 log.go:172] (0xc0006e4000) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.6.221 80\nConnection to 10.105.6.221 80 port [tcp/http] succeeded!\nI0519 01:09:30.394872 4436 log.go:172] (0xc000aad4a0) Data frame received for 1\nI0519 01:09:30.394889 4436 log.go:172] (0xc000ba65a0) (1) Data frame handling\nI0519 01:09:30.394910 4436 log.go:172] (0xc000ba65a0) (1) Data frame sent\nI0519 01:09:30.394926 4436 log.go:172] (0xc000aad4a0) (0xc000ba65a0) Stream removed, broadcasting: 1\nI0519 01:09:30.394943 4436 log.go:172] (0xc000aad4a0) Go away received\nI0519 01:09:30.395278 4436 log.go:172] (0xc000aad4a0) (0xc000ba65a0) Stream 
removed, broadcasting: 1\nI0519 01:09:30.395304 4436 log.go:172] (0xc000aad4a0) (0xc0000f30e0) Stream removed, broadcasting: 3\nI0519 01:09:30.395316 4436 log.go:172] (0xc000aad4a0) (0xc0006e4000) Stream removed, broadcasting: 5\n" May 19 01:09:30.400: INFO: stdout: "" May 19 01:09:30.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3079 execpod56vgs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30873' May 19 01:09:30.624: INFO: stderr: "I0519 01:09:30.539807 4456 log.go:172] (0xc000a7ba20) (0xc0009b85a0) Create stream\nI0519 01:09:30.539857 4456 log.go:172] (0xc000a7ba20) (0xc0009b85a0) Stream added, broadcasting: 1\nI0519 01:09:30.544964 4456 log.go:172] (0xc000a7ba20) Reply frame received for 1\nI0519 01:09:30.545013 4456 log.go:172] (0xc000a7ba20) (0xc00075ce60) Create stream\nI0519 01:09:30.545024 4456 log.go:172] (0xc000a7ba20) (0xc00075ce60) Stream added, broadcasting: 3\nI0519 01:09:30.546197 4456 log.go:172] (0xc000a7ba20) Reply frame received for 3\nI0519 01:09:30.546228 4456 log.go:172] (0xc000a7ba20) (0xc0006f2c80) Create stream\nI0519 01:09:30.546243 4456 log.go:172] (0xc000a7ba20) (0xc0006f2c80) Stream added, broadcasting: 5\nI0519 01:09:30.547084 4456 log.go:172] (0xc000a7ba20) Reply frame received for 5\nI0519 01:09:30.616562 4456 log.go:172] (0xc000a7ba20) Data frame received for 5\nI0519 01:09:30.616620 4456 log.go:172] (0xc0006f2c80) (5) Data frame handling\nI0519 01:09:30.616633 4456 log.go:172] (0xc0006f2c80) (5) Data frame sent\nI0519 01:09:30.616643 4456 log.go:172] (0xc000a7ba20) Data frame received for 5\nI0519 01:09:30.616651 4456 log.go:172] (0xc0006f2c80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30873\nConnection to 172.17.0.13 30873 port [tcp/30873] succeeded!\nI0519 01:09:30.616688 4456 log.go:172] (0xc000a7ba20) Data frame received for 3\nI0519 01:09:30.616707 4456 log.go:172] (0xc00075ce60) (3) Data frame handling\nI0519 01:09:30.618294 4456 log.go:172] (0xc000a7ba20) Data frame received for 1\nI0519 01:09:30.618339 4456 log.go:172] (0xc0009b85a0) (1) Data frame handling\nI0519 01:09:30.618354 4456 log.go:172] (0xc0009b85a0) (1) Data frame sent\nI0519 01:09:30.618372 4456 log.go:172] (0xc000a7ba20) (0xc0009b85a0) Stream removed, broadcasting: 1\nI0519 01:09:30.618386 4456 log.go:172] (0xc000a7ba20) Go away received\nI0519 01:09:30.618824 4456 log.go:172] (0xc000a7ba20) (0xc0009b85a0) Stream removed, broadcasting: 1\nI0519 01:09:30.618843 4456 log.go:172] (0xc000a7ba20) (0xc00075ce60) Stream removed, broadcasting: 3\nI0519 01:09:30.618854 4456 log.go:172] (0xc000a7ba20) (0xc0006f2c80) Stream removed, broadcasting: 5\n" May 19 01:09:30.624: INFO: stdout: "" May 19 01:09:30.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3079 execpod56vgs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30873' May 19 01:09:30.852: INFO: stderr: "I0519 01:09:30.757768 4476 log.go:172] (0xc00003be40) (0xc000159a40) Create stream\nI0519 01:09:30.757846 4476 log.go:172] (0xc00003be40) (0xc000159a40) Stream added, broadcasting: 1\nI0519 01:09:30.768514 4476 log.go:172] (0xc00003be40) Reply frame received for 1\nI0519 01:09:30.768569 4476 log.go:172] (0xc00003be40) (0xc00025a280) Create stream\nI0519 01:09:30.768579 4476 log.go:172] (0xc00003be40) (0xc00025a280) Stream added, broadcasting: 3\nI0519 01:09:30.774804 4476 log.go:172] (0xc00003be40) Reply frame received for 3\nI0519 01:09:30.774836 4476 
log.go:172] (0xc00003be40) (0xc0006b4a00) Create stream\nI0519 01:09:30.774845 4476 log.go:172] (0xc00003be40) (0xc0006b4a00) Stream added, broadcasting: 5\nI0519 01:09:30.776001 4476 log.go:172] (0xc00003be40) Reply frame received for 5\nI0519 01:09:30.845355 4476 log.go:172] (0xc00003be40) Data frame received for 5\nI0519 01:09:30.845404 4476 log.go:172] (0xc0006b4a00) (5) Data frame handling\nI0519 01:09:30.845425 4476 log.go:172] (0xc0006b4a00) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30873\nConnection to 172.17.0.12 30873 port [tcp/30873] succeeded!\nI0519 01:09:30.845446 4476 log.go:172] (0xc00003be40) Data frame received for 3\nI0519 01:09:30.845457 4476 log.go:172] (0xc00025a280) (3) Data frame handling\nI0519 01:09:30.845553 4476 log.go:172] (0xc00003be40) Data frame received for 5\nI0519 01:09:30.845580 4476 log.go:172] (0xc0006b4a00) (5) Data frame handling\nI0519 01:09:30.847061 4476 log.go:172] (0xc00003be40) Data frame received for 1\nI0519 01:09:30.847083 4476 log.go:172] (0xc000159a40) (1) Data frame handling\nI0519 01:09:30.847094 4476 log.go:172] (0xc000159a40) (1) Data frame sent\nI0519 01:09:30.847108 4476 log.go:172] (0xc00003be40) (0xc000159a40) Stream removed, broadcasting: 1\nI0519 01:09:30.847147 4476 log.go:172] (0xc00003be40) Go away received\nI0519 01:09:30.847409 4476 log.go:172] (0xc00003be40) (0xc000159a40) Stream removed, broadcasting: 1\nI0519 01:09:30.847424 4476 log.go:172] (0xc00003be40) (0xc00025a280) Stream removed, broadcasting: 3\nI0519 01:09:30.847435 4476 log.go:172] (0xc00003be40) (0xc0006b4a00) Stream removed, broadcasting: 5\n" May 19 01:09:30.853: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:09:30.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3079" for this suite. 
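The three nc -zv probes above confirm the same Service is reachable on its ClusterIP (10.105.6.221:80) and on the allocated NodePort (30873) via both node addresses. What the test provisions first can be sketched with client-go; a minimal, illustrative version under the assumption of a reachable kubeconfig (the service and namespace names mirror the log, but this is not the framework's own code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Type: NodePort asks kube-proxy to expose the service on every node;
	// leaving NodePort unset lets the apiserver allocate one from its range.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "nodeport-test"}, // hypothetical pod label
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	created, err := cs.CoreV1().Services("services-3079").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The allocated port (30873 in the run above) is what the nc probes hit on each node.
	fmt.Printf("ClusterIP %s, NodePort %d\n", created.Spec.ClusterIP, created.Spec.Ports[0].NodePort)
}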
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:13.284 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":282,"skipped":4685,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:09:30.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 19 01:09:31.569: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 19 01:09:33.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447371, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447371, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447371, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725447371, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 19 01:09:36.754: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:09:36.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1856" for this 
suite. STEP: Destroying namespace "webhook-1856-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.391 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":283,"skipped":4711,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:09:37.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 19 01:09:37.412: INFO: Waiting up to 5m0s for pod "downwardapi-volume-126fdcd4-1078-4383-acd3-24d26df3b91d" in namespace "projected-4890" to be "Succeeded or Failed" May 19 01:09:37.446: INFO: Pod "downwardapi-volume-126fdcd4-1078-4383-acd3-24d26df3b91d": Phase="Pending", Reason="", readiness=false. Elapsed: 33.335448ms May 19 01:09:39.487: INFO: Pod "downwardapi-volume-126fdcd4-1078-4383-acd3-24d26df3b91d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074165105s May 19 01:09:41.491: INFO: Pod "downwardapi-volume-126fdcd4-1078-4383-acd3-24d26df3b91d": Phase="Running", Reason="", readiness=true. Elapsed: 4.079016055s May 19 01:09:43.496: INFO: Pod "downwardapi-volume-126fdcd4-1078-4383-acd3-24d26df3b91d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.083194668s STEP: Saw pod success May 19 01:09:43.496: INFO: Pod "downwardapi-volume-126fdcd4-1078-4383-acd3-24d26df3b91d" satisfied condition "Succeeded or Failed" May 19 01:09:43.498: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-126fdcd4-1078-4383-acd3-24d26df3b91d container client-container: STEP: delete the pod May 19 01:09:43.528: INFO: Waiting for pod downwardapi-volume-126fdcd4-1078-4383-acd3-24d26df3b91d to disappear May 19 01:09:43.629: INFO: Pod downwardapi-volume-126fdcd4-1078-4383-acd3-24d26df3b91d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:09:43.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4890" for this suite. 
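The downward API pod above succeeds because a resourceFieldRef for limits.cpu falls back to node allocatable CPU when the container declares no limit, which is exactly what the test asserts. A sketch of the pod shape under test, using the corev1 types (pod name, image, command, and mount path are illustrative assumptions, not the test's exact fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod builds a pod whose projected volume exposes the container's
// effective CPU limit. With no limit set, the kubelet substitutes node
// allocatable CPU, so the file is still populated.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(downwardAPIPod().Name)
}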
• [SLOW TEST:6.468 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4712,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:09:43.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0519 01:10:24.841048 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 19 01:10:24.841: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:10:24.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4570" for this suite. 
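The orphan behavior verified above comes down to the delete options on the replication controller: with PropagationPolicy set to Orphan, the garbage collector removes the ownerReferences from the RC's pods instead of cascading the delete, which is why the pods survive the 30-second watch. A hedged client-go sketch of that call (the RC name is a placeholder; the e2e fixture's actual name may differ):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Orphan propagation: the RC object itself is deleted, but the GC strips
	// ownerReferences from its pods rather than deleting them.
	orphan := metav1.DeletePropagationOrphan
	err = cs.CoreV1().ReplicationControllers("gc-4570").Delete(
		context.TODO(),
		"simpletest.rc", // placeholder RC name
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
}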
• [SLOW TEST:41.126 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":285,"skipped":4733,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:10:24.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3155.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.103.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.103.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.103.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.103.175_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3155.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3155.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3155.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3155.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3155.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 175.103.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.103.175_udp@PTR;check="$$(dig +tcp +noall +answer +search 175.103.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.103.175_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 19 01:10:33.572: INFO: Unable to read wheezy_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:33.900: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:34.135: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:34.151: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:34.972: INFO: Unable to read jessie_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:35.002: INFO: Unable to read jessie_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:35.006: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:35.012: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:35.420: INFO: Lookups using dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57 failed for: [wheezy_udp@dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_udp@dns-test-service.dns-3155.svc.cluster.local jessie_tcp@dns-test-service.dns-3155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local] May 19 01:10:40.424: INFO: Unable to read wheezy_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:40.429: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods 
dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:40.433: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:40.436: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:40.463: INFO: Unable to read jessie_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:40.464: INFO: Unable to read jessie_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:40.467: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:40.469: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:40.529: INFO: Lookups using dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57 failed for: [wheezy_udp@dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_udp@dns-test-service.dns-3155.svc.cluster.local jessie_tcp@dns-test-service.dns-3155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local] May 19 01:10:45.424: INFO: Unable to read wheezy_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:45.427: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:45.430: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:45.434: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:45.453: INFO: Unable to read jessie_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the 
server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:45.456: INFO: Unable to read jessie_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:45.458: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:45.461: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:45.477: INFO: Lookups using dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57 failed for: [wheezy_udp@dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_udp@dns-test-service.dns-3155.svc.cluster.local jessie_tcp@dns-test-service.dns-3155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local] May 19 01:10:50.425: INFO: Unable to read wheezy_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:50.428: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:50.432: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:50.435: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:50.457: INFO: Unable to read jessie_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:50.460: INFO: Unable to read jessie_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:50.463: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:50.466: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod 
dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:50.485: INFO: Lookups using dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57 failed for: [wheezy_udp@dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_udp@dns-test-service.dns-3155.svc.cluster.local jessie_tcp@dns-test-service.dns-3155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local] May 19 01:10:55.424: INFO: Unable to read wheezy_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:55.427: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:55.430: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:55.433: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:55.455: INFO: Unable to read jessie_udp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:55.457: INFO: Unable to read jessie_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:55.460: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:55.463: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:10:55.479: INFO: Lookups using dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57 failed for: [wheezy_udp@dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_udp@dns-test-service.dns-3155.svc.cluster.local jessie_tcp@dns-test-service.dns-3155.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3155.svc.cluster.local] May 19 
01:11:00.429: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:11:00.460: INFO: Unable to read jessie_tcp@dns-test-service.dns-3155.svc.cluster.local from pod dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57: the server could not find the requested resource (get pods dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57) May 19 01:11:00.486: INFO: Lookups using dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57 failed for: [wheezy_tcp@dns-test-service.dns-3155.svc.cluster.local jessie_tcp@dns-test-service.dns-3155.svc.cluster.local] May 19 01:11:05.485: INFO: DNS probes using dns-3155/dns-test-80c2cf96-cb9b-4634-8df1-00177f24eb57 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:11:06.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3155" for this suite. • [SLOW TEST:41.328 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":286,"skipped":4740,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:11:06.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 19 01:11:10.807: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4953 pod-service-account-d7b746dd-3d5d-436f-b453-2533836e1a9b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 19 01:11:11.015: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4953 pod-service-account-d7b746dd-3d5d-436f-b453-2533836e1a9b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 19 01:11:11.222: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4953 pod-service-account-d7b746dd-3d5d-436f-b453-2533836e1a9b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:11:11.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4953" for this suite. 
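The token test's three exec calls read the files the kubelet projects into every pod that mounts a service-account token: token, ca.crt, and namespace. The equivalent check from inside the pod itself is a few lines of Go (a sketch assuming it runs in-cluster, not on the test host):

package main

import (
	"fmt"
	"os"
)

// Every pod with a mounted service-account token gets three files under this
// directory; the conformance test cats each one via kubectl exec.
func main() {
	const dir = "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(dir + "/" + f)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", f, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
}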
• [SLOW TEST:5.336 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":287,"skipped":4767,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 19 01:11:11.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-a8e800ab-1c12-4b85-972c-1b8d8acb0e1a in namespace container-probe-3295 May 19 01:11:15.650: INFO: Started pod test-webserver-a8e800ab-1c12-4b85-972c-1b8d8acb0e1a in namespace container-probe-3295 STEP: checking the pod's current state and verifying that restartCount is present May 19 01:11:15.653: INFO: Initial restart count of pod test-webserver-a8e800ab-1c12-4b85-972c-1b8d8acb0e1a is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 19 01:15:16.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3295" for this suite. • [SLOW TEST:244.797 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4798,"failed":0} SSSSSSSSS May 19 01:15:16.310: INFO: Running AfterSuite actions on all nodes May 19 01:15:16.310: INFO: Running AfterSuite actions on node 1 May 19 01:15:16.310: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0} Ran 288 of 5095 Specs in 5819.263 seconds SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped PASS
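For the final probe spec above, the pod under test serves HTTP and carries an httpGet /healthz liveness probe that keeps succeeding, so its restartCount must remain 0 across the roughly four-minute observation window. A sketch of that pod in corev1 types (the image, args, and probe timings are assumptions, not the test's exact values; the field is Handler in the 1.18/1.19-era API shown in this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessPod sketches the test-webserver pod: the probe hits /healthz and
// keeps passing, so the kubelet should never restart the container.
func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"}, // log pods carry a UID suffix
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.12", // assumed image
				Args:  []string{"test-webserver"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15, // illustrative timings
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() {
	fmt.Println(livenessPod().Name)
}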