Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654903154 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Jun 10 23:19:16.344: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.349: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:16.381: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 10 23:19:16.448: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting
Jun 10 23:19:16.448: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting
Jun 10 23:19:16.448: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 10 23:19:16.448: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 10 23:19:16.448: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 10 23:19:16.465: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 10 23:19:16.465: INFO: e2e test version: v1.21.9
Jun 10 23:19:16.466: INFO: kube-apiserver version: v1.21.1
Jun 10 23:19:16.467: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.473: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Jun 10 23:19:16.469: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.491: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 10 23:19:16.471: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.493: INFO: Cluster IP family: ipv4
Jun 10 23:19:16.470: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.493: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Jun 10 23:19:16.475: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.499: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 10 23:19:16.477: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.499: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun 10 23:19:16.479: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.503: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
Jun 10 23:19:16.485: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.509: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 10 23:19:16.505: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.530: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSS
------------------------------
Jun 10 23:19:16.518: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:19:16.541: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:16.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W0610 23:19:16.582193      28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:16.582: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:16.585: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun 10 23:19:16.587: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:16.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-4949" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.058 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:16.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0610 23:19:16.896021      31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:16.896: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:16.898: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-9222
STEP: changing service nodeport-range-test to out-of-range NodePort 16457
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 16457
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:16.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9222" for this suite.
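The rejected NodePort 16457 falls outside the apiserver's allocatable window. A minimal sketch of that range check, assuming the default kube-apiserver `--service-node-port-range` of 30000-32767 (the flag can change this window; `in_nodeport_range` is a hypothetical helper, not part of the e2e suite):

```shell
# Default --service-node-port-range is 30000-32767 (assumption: the cluster
# under test uses the default; the range is configurable on the apiserver).
in_nodeport_range() {
  port="$1"
  [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]
}

# 16457 is the out-of-range port from the log; 30999 appears later as a valid one.
for p in 16457 30999; do
  if in_nodeport_range "$p"; then
    echo "$p: in range"
  else
    echo "$p: out of range"
  fi
done
```

Both the update to 16457 and the fresh create with it are expected to fail apiserver validation for exactly this reason.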
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•SSSS
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":72,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:16.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-3466
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-3466
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:16.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3466" for this suite.
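The collision sequence exercised here (allocate, conflict, release, re-allocate) can be modelled with a toy allocator. This is a sketch of the behaviour under test, not the apiserver's implementation; the port number 30100 and the helper names are illustrative:

```shell
# Toy stand-in for the apiserver's NodePort allocator: a shell string acts
# as the set of ports already handed out.
used_ports=""

allocate() {
  case " $used_ports " in
    *" $1 "*) echo "failed: port $1 already allocated"; return 1 ;;
  esac
  used_ports="$used_ports $1"
  echo "allocated $1"
}

release() {
  # unquoted expansion splits used_ports into words; grep -vx drops the port
  used_ports=$(printf '%s\n' $used_ports | grep -vx "$1" | tr '\n' ' ')
  echo "released $1"
}

allocate 30100   # nodeport-collision-1 takes the port
allocate 30100   # nodeport-collision-2 conflicts and is rejected
release 30100    # deleting collision-1 frees the port
allocate 30100   # retrying collision-2 now succeeds
```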
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•SSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":71,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:16.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
W0610 23:19:16.904393      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:16.904: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:16.906: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun 10 23:19:16.927: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jun 10 23:19:16.930: INFO: starting watch
STEP: patching
STEP: updating
Jun 10 23:19:16.938: INFO: waiting for watch events with expected annotations
Jun 10 23:19:16.938: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Jun 10 23:19:16.938: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:16.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-6670" for this suite.
•SSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":83,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:17.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W0610 23:19:17.285670      25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:17.286: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:17.287: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun 10 23:19:17.289: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:17.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-910" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:17.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0610 23:19:17.029570      35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:17.029: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:17.031: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-9375
STEP: deleting original service nodeport-reuse
Jun 10 23:19:17.057: INFO: Creating new host exec pod
Jun 10 23:19:17.073: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:19.077: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:21.077: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:23.081: INFO: The status of Pod hostexec is Running (Ready = true)
Jun 10 23:19:23.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9375 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :30999' | tail -n +2 | grep LISTEN'
Jun 10 23:19:23.450: INFO: stderr: "+ ss -ant46 'sport = :30999'\n+ tail -n +2\n+ grep LISTEN\n"
Jun 10 23:19:23.450: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 30999
STEP: deleting service nodeport-reuse in namespace services-9375
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:23.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9375" for this suite.
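The `! ss -ant46 'sport = :30999' | tail -n +2 | grep LISTEN` command above asserts that nothing on the node still listens on the released NodePort. A sketch of the same pipeline run against canned output (assumption: the two sample lines only approximate `ss -ant46` formatting; the real spec runs `ss` inside the hostexec pod via kubectl):

```shell
# Canned stand-in for `ss -ant46 'sport = :30999'` output: a header row
# plus one listening socket on the NodePort.
sample_output='State  Recv-Q Send-Q Local Address:Port  Peer Address:Port
LISTEN 0      128    0.0.0.0:30999       0.0.0.0:*'

# tail -n +2 strips the header; grep LISTEN matches any remaining listener.
# The e2e test negates the pipeline with `!`, so it passes only when grep
# finds nothing, i.e. the port has truly been released.
if printf '%s\n' "$sample_output" | tail -n +2 | grep -q LISTEN; then
  echo "port 30999 still in use"
else
  echo "port 30999 free"
fi
```

In the log the pipeline's stdout is empty, so the negated check succeeds and the suite can recreate `nodeport-reuse` on the same port 30999.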
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:6.470 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":1,"skipped":145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:17.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-1037
Jun 10 23:19:17.408: INFO: hairpin-test cluster ip: 10.233.16.242
STEP: creating a client/server pod
Jun 10 23:19:17.421: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:19.426: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:21.424: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:23.425: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:25.424: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:27.424: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:29.426: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:31.424: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:33.426: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-1037 to expose endpoints map[hairpin:[8080]]
Jun 10 23:19:33.435: INFO: successfully validated that service hairpin-test in namespace services-1037 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Jun 10 23:19:34.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1037 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Jun 10 23:19:34.720: INFO: stderr: "+ nc -v -t -w 2 hairpin-test 8080\n+ echo hostName\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Jun 10 23:19:34.720: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 10 23:19:34.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1037 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.16.242 8080'
Jun 10 23:19:34.976: INFO: stderr: "+ nc -v -t -w 2 10.233.16.242 8080\nConnection to 10.233.16.242 8080 port [tcp/http-alt] succeeded!\n+ echo hostName\n"
Jun 10 23:19:34.976: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:34.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1037" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:17.610 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":1,"skipped":305,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:35.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Jun 10 23:19:35.199: INFO: Creating new exec pod
Jun 10 23:19:41.217: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Jun 10 23:19:41.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5563 exec execpod-noendpointsgdjh8 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jun 10 23:19:42.484: INFO: rc: 1
Jun 10 23:19:42.484: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5563 exec execpod-noendpointsgdjh8 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:42.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5563" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:7.332 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":2,"skipped":388,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:42.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun 10 23:19:42.921: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jun 10 23:19:42.924: INFO: starting watch
STEP: patching
STEP: updating
Jun 10 23:19:42.934: INFO: waiting for watch events with expected annotations
Jun 10 23:19:42.934: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Jun 10 23:19:42.934: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:42.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-2166" for this suite.
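The operations exercised by this spec (create, get, list, watch, patch, update, delete, delete collection) act on ordinary `networking.k8s.io/v1` NetworkPolicy objects. A minimal manifest of the kind such a spec round-trips (config sketch only; the name and selector are illustrative, not taken from the log):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-netpol      # hypothetical name, not from the log
spec:
  podSelector: {}           # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress               # with no ingress rules listed, all ingress is denied
```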
•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":581,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:16.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0610 23:19:16.729722      41 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:16.729: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:16.731: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-1831
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:19:16.840: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:16.874: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:18.880: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:20.881: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:22.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:24.877: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:26.878: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:28.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:30.881: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:32.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:34.877: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:36.878: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:38.881: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:19:38.886: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:19:44.908: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:19:44.908: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:44.915: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:44.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1831" for this suite.
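The framework's readiness wait above polls the pod phase on a fixed interval until it reports `Running (Ready = true)`. A toy shell rendering of that loop, with a faked status source standing in for the apiserver (the real framework does this with client-go, not shell; the three-poll schedule below is an assumption for illustration):

```shell
# Fake phase schedule: Pending on the first two polls, Running on the third.
attempt=0
status=Pending
while [ "$status" != "Running" ]; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 3 ]; then status=Running; else status=Pending; fi
  echo "poll $attempt: The status of Pod netserver-0 is $status"
  # the real framework sleeps ~2s between polls; omitted here
done
```

Each `Jun 10 ...` status line in the log corresponds to one iteration of such a loop, roughly two seconds apart.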
S [SKIPPING] [28.219 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for endpoint-Service: http [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:16.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0610 23:19:16.837322 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:16.837: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:16.839: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-9971
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:19:16.993: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:17.026: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:19.030: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:21.030: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:23.035: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:25.031: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:27.032: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:29.033: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:31.030: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:33.032: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:35.030: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:37.031: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:39.032: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:19:39.037: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:19:45.059: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:19:45.059: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:45.072: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:45.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9971" for this suite.
S [SKIPPING] [28.268 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for client IP based session affinity: http [LinuxOnly] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:16.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0610 23:19:16.925122 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:16.925: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:16.927: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-400
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:19:17.042: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:17.076: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:19.080: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:21.081: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:23.082: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:25.081: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:27.080: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:29.084: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:31.080: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:33.081: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:35.080: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:37.081: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:39.083: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:19:39.088: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:19:45.108: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:19:45.108: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:45.115: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:45.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-400" for this suite.
S [SKIPPING] [28.224 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should be able to handle large requests: udp [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:23.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-4838
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:19:23.808: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:23.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:25.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:27.847: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:29.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:31.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:33.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:35.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:37.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:39.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:41.844: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:43.848: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:45.845: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:19:45.850: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:19:53.890: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:19:53.890: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:53.899: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:53.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4838" for this suite.
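Each test setup above logs "Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2". As a hedged sketch only: the single observed data point (34 tries for 2 endpoints) is consistent with a budget of 15 tries per endpoint plus 4 spare; the formula below is an inference from this log, not the framework's verified source, and `maxTries` is a hypothetical helper name.

```go
package main

import "fmt"

// maxTries models the pod-polling budget reported in the log lines
// "Setting MaxTries for pod polling to 34 ... based on endpoint count 2".
// The formula (15 tries per endpoint plus 4 spare) is an assumption
// inferred from that one observation; the real e2e framework may
// compute the value differently.
func maxTries(endpointCount int) int {
	return 15*endpointCount + 4
}

func main() {
	fmt.Println(maxTries(2)) // 34, matching the log
}
```

With 2 endpoints this reproduces the logged value of 34; scaling the retry budget with endpoint count keeps total polling time roughly proportional to the number of backends being probed.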
S [SKIPPING] [30.237 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should check kube-proxy urls [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:17.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-3127
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:19:17.236: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:17.266: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:19.271: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:21.270: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:23.269: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:25.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:27.270: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:29.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:31.270: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:33.271: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:35.270: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:37.270: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:39.271: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:19:39.277: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:41.281: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:43.282: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:45.280: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:47.282: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:49.281: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:19:57.306: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:19:57.306: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:57.314: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:57.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3127" for this suite.
S [SKIPPING] [40.220 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for pod-Service: http [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:17.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-5149
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:19:17.269: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:17.300: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:19.303: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:21.303: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:23.304: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:25.303: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:27.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:29.305: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:31.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:33.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:35.303: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:37.304: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:39.304: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:19:39.309: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:41.313: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:43.314: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:45.313: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:47.312: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:49.313: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:19:57.337: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:19:57.337: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:57.343: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:57.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5149" for this suite.
S [SKIPPING] [40.221 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for multiple endpoint-Services with same selector [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:57.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Jun 10 23:19:57.515: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:19:57.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8326" for this suite.
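Every networking spec above is skipped with the same message, "Requires at least 2 nodes (not -1)". A minimal sketch of that gate, under stated assumptions: `skipReason` is a hypothetical helper (not the framework's real function name), and the -1 in the log is taken to be the node count the framework computed, which on this local provider appears to mean the count could not be determined.

```go
package main

import "fmt"

// skipReason sketches the node-count gate behind the repeated
// "Requires at least 2 nodes (not -1)" skips in the log. This is an
// illustrative helper, not the e2e framework's actual code: it just
// compares the computed node count against the spec's minimum and
// formats the same skip message the log shows.
func skipReason(minNodes, gotNodes int) (skip bool, reason string) {
	if gotNodes < minNodes {
		return true, fmt.Sprintf("Requires at least %d nodes (not %d)", minNodes, gotNodes)
	}
	return false, ""
}

func main() {
	skip, reason := skipReason(2, -1)
	fmt.Println(skip, reason)
}
```

Run against the values seen in the log (minimum 2, computed -1), this yields a skip with exactly the logged message, which is why every multi-node spec in this run is reported as SKIPPING rather than failed.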
S [SKIPPING] [0.034 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for the cluster [Provider:GCE] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Only supported for providers [gce gke] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:17.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0610 23:19:17.282452 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:17.282: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:17.284: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-1990
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:19:17.385: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:19:17.418: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:19.421: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:21.421: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:23.422: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:25.422: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:27.424: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:29.426: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:31.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:33.427: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:35.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:37.425: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:19:39.422: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:19:39.427: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:41.430: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:43.431: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:45.432: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:47.431: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:19:49.431: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:20:01.457: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:20:01.457: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:20:01.464: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:01.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1990" for this suite.
S [SKIPPING] [44.213 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for endpoint-Service: udp [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:45.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Jun 10 23:19:45.493: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:47.498: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:49.496: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:51.497: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:53.497: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Jun 10 23:19:53.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7216 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jun 10 23:19:54.432: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jun 10 23:19:54.432: INFO: stdout: "iptables"
Jun 10 23:19:54.432: INFO: proxyMode: iptables
Jun 10 23:19:54.438: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jun 10 23:19:54.441: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-7216
Jun 10 23:19:54.447: INFO: sourceip-test cluster ip: 10.233.62.115
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Jun 10 23:19:54.474: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:56.478: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:58.479: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-7216 to expose endpoints map[echo-sourceip:[8080]]
Jun 10 23:19:58.488: INFO: successfully validated that service sourceip-test in namespace services-7216 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Jun 10 23:19:58.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Jun 10 23:20:00.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-69f8997fb9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 23:20:02.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-69f8997fb9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 23:20:04.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-69f8997fb9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 23:20:06.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-69f8997fb9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 23:20:08.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-69f8997fb9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 23:20:10.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500009, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790499998, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-69f8997fb9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 23:20:12.504: INFO: Waiting up to 2m0s to get response from 10.233.62.115:8080
Jun 10 23:20:12.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7216 exec pause-pod-69f8997fb9-f6pxx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.62.115:8080/clientip'
Jun 10 23:20:13.002: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.62.115:8080/clientip\n"
Jun 10 23:20:13.002: INFO: stdout: "10.244.4.103:59900"
STEP: Verifying the preserved source ip
Jun 10 23:20:13.002: INFO: Waiting up to 2m0s to get response from 10.233.62.115:8080
Jun 10 23:20:13.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7216 exec pause-pod-69f8997fb9-tb8cg -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.62.115:8080/clientip'
Jun 10 23:20:13.256: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.62.115:8080/clientip\n"
Jun 10 23:20:13.256: INFO: stdout: "10.244.3.120:50946"
STEP: Verifying the preserved source ip
Jun 10 23:20:13.256: INFO: Deleting deployment
Jun 10 23:20:13.260: INFO: Cleaning up the echo server pod
Jun 10 23:20:13.268: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:13.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7216" for this suite.
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:27.834 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903 ------------------------------ {"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":1,"skipped":290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:19:45.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should create endpoints for unready pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624 STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod] STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-66c955e1-4cf2-4e95-8db2-2ebc40686c41] STEP: Verifying pods for RC slow-terminating-unready-pod Jun 10 23:19:45.183: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: trying to 
dial each unique pod Jun 10 23:19:51.196: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-2m85t]: "NOW: 2022-06-10 23:19:51.194782064 +0000 UTC m=+2.235523222", 1 of 1 required successes so far STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-2668.svc.cluster.local Jun 10 23:19:51.196: INFO: Creating new exec pod Jun 10 23:19:57.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2668 exec execpod-m927w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/' Jun 10 23:19:57.473: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/\n" Jun 10 23:19:57.473: INFO: stdout: "NOW: 2022-06-10 23:19:57.463242185 +0000 UTC m=+8.503983349" STEP: Scaling down replication controller to zero STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-2668 to 0 STEP: Update service to not tolerate unready services STEP: Check if pod is unreachable Jun 10 23:20:02.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2668 exec execpod-m927w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/; test "$?" -ne "0"' Jun 10 23:20:05.009: INFO: rc: 1 Jun 10 23:20:05.009: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2022-06-10 23:20:04.997571506 +0000 UTC m=+16.038312664, err error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2668 exec execpod-m927w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/; test "$?" 
-ne "0": Command stdout: NOW: 2022-06-10 23:20:04.997571506 +0000 UTC m=+16.038312664 stderr: + curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/ + test 0 -ne 0 command terminated with exit code 1 error: exit status 1 Jun 10 23:20:07.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2668 exec execpod-m927w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/; test "$?" -ne "0"' Jun 10 23:20:07.773: INFO: rc: 1 Jun 10 23:20:07.773: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2022-06-10 23:20:07.762163702 +0000 UTC m=+18.802904860, err error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2668 exec execpod-m927w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/; test "$?" -ne "0": Command stdout: NOW: 2022-06-10 23:20:07.762163702 +0000 UTC m=+18.802904860 stderr: + curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/ + test 0 -ne 0 command terminated with exit code 1 error: exit status 1 Jun 10 23:20:09.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2668 exec execpod-m927w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/; test "$?" 
-ne "0"' Jun 10 23:20:10.375: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/\n+ test 7 -ne 0\n" Jun 10 23:20:10.375: INFO: stdout: "" STEP: Update service to tolerate unready services again STEP: Check if terminating pod is available through service Jun 10 23:20:10.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2668 exec execpod-m927w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/' Jun 10 23:20:11.747: INFO: rc: 7 Jun 10 23:20:11.747: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2668 exec execpod-m927w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/: Command stdout: stderr: + curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/ command terminated with exit code 7 error: exit status 7 Jun 10 23:20:13.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2668 exec execpod-m927w -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/' Jun 10 23:20:14.170: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-2668.svc.cluster.local:80/\n" Jun 10 23:20:14.170: INFO: stdout: "NOW: 2022-06-10 23:20:14.161375302 +0000 UTC m=+25.202116460" STEP: Remove pods immediately STEP: stopping RC slow-terminating-unready-pod in namespace services-2668 STEP: deleting service tolerate-unready in namespace services-2668 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:20:14.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-2668" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:29.057 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create endpoints for unready pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624 ------------------------------ {"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":1,"skipped":110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:19:45.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for node-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212 STEP: Performing setup for networking test in namespace nettest-251 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 10 23:19:45.324: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 10 23:19:45.356: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:47.360: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:49.360: INFO: The status of Pod 
netserver-0 is Running (Ready = false) Jun 10 23:19:51.360: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:19:53.361: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:19:55.360: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:19:57.360: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:19:59.364: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:01.361: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:03.361: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:05.360: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:07.360: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 10 23:20:07.364: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 10 23:20:17.401: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Jun 10 23:20:17.401: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 10 23:20:17.408: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:20:17.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-251" for this suite. 
S [SKIPPING] [32.200 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for node-Service: udp [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:19:54.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should be able to handle large requests: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451 STEP: Performing setup for networking test in namespace nettest-5767 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 10 23:19:54.155: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 10 23:19:54.188: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:56.192: INFO: The 
status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:58.193: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:00.193: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:02.193: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:04.193: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:06.192: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:08.194: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:10.192: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:12.193: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:14.192: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:16.191: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:18.193: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:20.193: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 10 23:20:20.198: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 10 23:20:24.219: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Jun 10 23:20:24.219: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 10 23:20:24.227: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:20:24.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-5767" for this suite. 
S [SKIPPING] [30.187 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should be able to handle large requests: http [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:19:58.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for pod-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168 STEP: Performing setup for networking test in namespace nettest-3973 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 10 23:19:58.222: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 10 23:19:58.255: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:00.258: INFO: The status of Pod 
netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:02.261: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:04.264: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:06.258: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:08.263: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:10.260: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:12.263: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:14.260: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:16.258: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:18.261: INFO: The status of Pod netserver-0 is Running (Ready = false) Jun 10 23:20:20.259: INFO: The status of Pod netserver-0 is Running (Ready = true) Jun 10 23:20:20.263: INFO: The status of Pod netserver-1 is Running (Ready = false) Jun 10 23:20:22.267: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jun 10 23:20:26.288: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Jun 10 23:20:26.288: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jun 10 23:20:26.295: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:20:26.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-3973" for this suite. 
S [SKIPPING] [28.180 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for pod-Service: udp [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:20:26.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69 Jun 10 23:20:26.375: INFO: Found ClusterRoles; assuming RBAC is enabled. [BeforeEach] [Slow] Nginx /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688 Jun 10 23:20:26.481: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [Slow] Nginx /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706 STEP: No ingress created, no cleanup necessary [AfterEach] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:20:26.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-9215" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.152 seconds] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 [Slow] Nginx /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685 should conform to Ingress spec [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:20:26.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should provide unchanging, static URL paths for kubernetes api services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112 STEP: testing: /healthz STEP: testing: /api STEP: testing: /apis STEP: testing: /metrics STEP: testing: /openapi/v2 STEP: testing: /version STEP: testing: /logs [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:20:27.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "nettest-1870" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":2,"skipped":654,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:19:43.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename conntrack STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96 [It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203 STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-7583 STEP: creating a client pod for probing the service svc-udp Jun 10 23:19:43.641: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:45.644: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:47.645: INFO: The status of Pod pod-client is Running (Ready = true) Jun 10 23:19:47.658: INFO: Pod client logs: Fri Jun 10 23:19:45 UTC 2022 Fri Jun 10 23:19:45 UTC 2022 Try: 1 Fri Jun 10 23:19:45 UTC 2022 Try: 2 Fri Jun 10 23:19:45 UTC 2022 Try: 3 Fri Jun 10 23:19:45 UTC 2022 Try: 4 Fri Jun 10 23:19:45 UTC 2022 Try: 5 Fri Jun 10 23:19:45 UTC 2022 Try: 6 Fri Jun 10 23:19:45 UTC 2022 Try: 7 STEP: creating a backend pod pod-server-1 for the service svc-udp Jun 10 23:19:47.672: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Jun 
10 23:19:49.676: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:51.676: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:53.676: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:55.677: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:57.677: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:19:59.675: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:01.676: INFO: The status of Pod pod-server-1 is Running (Ready = true) STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-7583 to expose endpoints map[pod-server-1:[80]] Jun 10 23:20:01.687: INFO: successfully validated that service svc-udp in namespace conntrack-7583 exposes endpoints map[pod-server-1:[80]] STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208 STEP: creating a second backend pod pod-server-2 for the service svc-udp Jun 10 23:20:16.720: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:18.727: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:20.726: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:22.724: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Jun 10 23:20:24.724: INFO: The status of Pod pod-server-2 is Running (Ready = true) Jun 10 23:20:24.727: INFO: Cleaning up pod-server-1 pod Jun 10 23:20:24.736: INFO: Waiting for pod pod-server-1 to disappear Jun 10 23:20:24.739: INFO: Pod pod-server-1 no longer exists STEP: waiting up to 3m0s for service svc-udp in 
namespace conntrack-7583 to expose endpoints map[pod-server-2:[80]] Jun 10 23:20:24.746: INFO: successfully validated that service svc-udp in namespace conntrack-7583 exposes endpoints map[pod-server-2:[80]] STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208 [AfterEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:20:34.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "conntrack-7583" for this suite. • [SLOW TEST:51.177 seconds] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to preserve UDP traffic when server pod cycles for a ClusterIP service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203 ------------------------------ {"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":4,"skipped":906,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:20:34.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85 Jun 10 23:20:34.957: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/:
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Jun 10 23:19:17.011: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:19.017: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:21.020: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:23.019: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:25.016: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:27.017: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:29.020: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:31.020: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Jun 10 23:19:31.041: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:33.051: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:35.044: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Jun 10 23:20:35.081: INFO: boom-server pod logs: 2022/06/10 23:19:26 external ip: 10.244.4.81
2022/06/10 23:19:26 listen on 0.0.0.0:9000
2022/06/10 23:19:26 probing 10.244.4.81
2022/06/10 23:19:35 tcp packet: &{SrcPort:41488 DestPort:9000 Seq:938684203 Ack:0 Flags:40962 WindowSize:29200 Checksum:48731 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:35 tcp packet: &{SrcPort:41488 DestPort:9000 Seq:938684204 Ack:2120373798 Flags:32784 WindowSize:229 Checksum:46156 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:35 connection established
2022/06/10 23:19:35 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 162 16 126 96 207 134 55 243 47 44 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:35 checksumer: &{sum:458709 oddByte:33 length:39}
2022/06/10 23:19:35 ret:  458742
2022/06/10 23:19:35 ret:  65532
2022/06/10 23:19:35 ret:  65532
2022/06/10 23:19:35 boom packet injected
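The checksumer lines above record standard Internet-checksum arithmetic: the 32-bit running sum (458709) plus the trailing odd byte (33) gives the first `ret:` value (458742), which is then folded down to 16 bits (65532). The injected 7-byte payload `[98 111 111 109 33 33 33]` is the ASCII string `boom!!!`. A minimal sketch of the folding step, assuming it follows the usual RFC 1071 carry-fold (the `fold` helper below is illustrative, not the test's actual code):

```python
def fold(s: int) -> int:
    """Fold a 32-bit running checksum down to 16 bits by repeatedly
    adding the carry (high 16 bits) back into the low 16 bits,
    as in the RFC 1071 Internet checksum."""
    while s > 0xFFFF:
        s = (s >> 16) + (s & 0xFFFF)
    return s

# Values from the first checksumer entry in the log above:
print(fold(458709 + 33))  # 65532, matching the repeated "ret:" lines
```

The same arithmetic reproduces the later entries as well, e.g. 430436 + 33 folds to 37259 and 440828 + 33 folds to 47651.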
2022/06/10 23:19:35 tcp packet: &{SrcPort:41488 DestPort:9000 Seq:938684204 Ack:2120373798 Flags:32785 WindowSize:229 Checksum:46155 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
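The `Flags` values in these packet dumps appear to be the raw 16-bit offset/flags word from the TCP header: the familiar flag bits sit in the low bits (FIN=0x01, SYN=0x02, ACK=0x10) while the high nibble carries the data offset (0xA002 is a SYN with a 40-byte header, 0x8010 an ACK with a 32-byte header). A small decoder, assuming that layout (the `decode_flags` helper is illustrative, not part of the test image):

```python
# Standard TCP flag bits in the low bits of the offset/flags word.
FIN, SYN, ACK = 0x01, 0x02, 0x10

def decode_flags(raw: int) -> str:
    """Name the TCP flags set in a raw offset/flags word from the log."""
    names = [n for bit, n in ((FIN, "FIN"), (SYN, "SYN"), (ACK, "ACK"))
             if raw & bit]
    return " ".join(names)

print(decode_flags(40962))  # SYN      (0xA002)
print(decode_flags(32784))  # ACK      (0x8010)
print(decode_flags(32785))  # FIN ACK  (0x8011)
```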
2022/06/10 23:19:37 tcp packet: &{SrcPort:36691 DestPort:9000 Seq:2725089685 Ack:0 Flags:40962 WindowSize:29200 Checksum:64610 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:37 tcp packet: &{SrcPort:36691 DestPort:9000 Seq:2725089686 Ack:608732330 Flags:32784 WindowSize:229 Checksum:5658 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:37 connection established
2022/06/10 23:19:37 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 143 83 36 70 254 10 162 109 145 150 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:37 checksumer: &{sum:430436 oddByte:33 length:39}
2022/06/10 23:19:37 ret:  430469
2022/06/10 23:19:37 ret:  37259
2022/06/10 23:19:37 ret:  37259
2022/06/10 23:19:37 boom packet injected
2022/06/10 23:19:37 tcp packet: &{SrcPort:36691 DestPort:9000 Seq:2725089686 Ack:608732330 Flags:32785 WindowSize:229 Checksum:5657 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:39 tcp packet: &{SrcPort:36372 DestPort:9000 Seq:4180572767 Ack:0 Flags:40962 WindowSize:29200 Checksum:46661 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:39 tcp packet: &{SrcPort:36372 DestPort:9000 Seq:4180572768 Ack:1616945774 Flags:32784 WindowSize:229 Checksum:28239 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:39 connection established
2022/06/10 23:19:39 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 142 20 96 95 27 206 249 46 122 96 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:39 checksumer: &{sum:440828 oddByte:33 length:39}
2022/06/10 23:19:39 ret:  440861
2022/06/10 23:19:39 ret:  47651
2022/06/10 23:19:39 ret:  47651
2022/06/10 23:19:39 boom packet injected
2022/06/10 23:19:39 tcp packet: &{SrcPort:36372 DestPort:9000 Seq:4180572768 Ack:1616945774 Flags:32785 WindowSize:229 Checksum:28238 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:41 tcp packet: &{SrcPort:34196 DestPort:9000 Seq:3339903871 Ack:0 Flags:40962 WindowSize:29200 Checksum:32752 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:41 tcp packet: &{SrcPort:34196 DestPort:9000 Seq:3339903872 Ack:2893024677 Flags:32784 WindowSize:229 Checksum:30947 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:41 connection established
2022/06/10 23:19:41 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 133 148 172 110 135 5 199 18 227 128 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:41 checksumer: &{sum:427234 oddByte:33 length:39}
2022/06/10 23:19:41 ret:  427267
2022/06/10 23:19:41 ret:  34057
2022/06/10 23:19:41 ret:  34057
2022/06/10 23:19:41 boom packet injected
2022/06/10 23:19:41 tcp packet: &{SrcPort:34196 DestPort:9000 Seq:3339903872 Ack:2893024677 Flags:32785 WindowSize:229 Checksum:30946 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:43 tcp packet: &{SrcPort:33621 DestPort:9000 Seq:1090465221 Ack:0 Flags:40962 WindowSize:29200 Checksum:46636 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:43 tcp packet: &{SrcPort:33621 DestPort:9000 Seq:1090465222 Ack:3595519714 Flags:32784 WindowSize:229 Checksum:17457 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:43 connection established
2022/06/10 23:19:43 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 131 85 214 77 192 66 64 255 45 198 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:43 checksumer: &{sum:496646 oddByte:33 length:39}
2022/06/10 23:19:43 ret:  496679
2022/06/10 23:19:43 ret:  37934
2022/06/10 23:19:43 ret:  37934
2022/06/10 23:19:43 boom packet injected
2022/06/10 23:19:43 tcp packet: &{SrcPort:33621 DestPort:9000 Seq:1090465222 Ack:3595519714 Flags:32785 WindowSize:229 Checksum:17456 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:45 tcp packet: &{SrcPort:41488 DestPort:9000 Seq:938684205 Ack:2120373799 Flags:32784 WindowSize:229 Checksum:26154 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:45 tcp packet: &{SrcPort:39875 DestPort:9000 Seq:2561426427 Ack:0 Flags:40962 WindowSize:29200 Checksum:10250 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:45 tcp packet: &{SrcPort:39875 DestPort:9000 Seq:2561426428 Ack:49005195 Flags:32784 WindowSize:229 Checksum:1529 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:45 connection established
2022/06/10 23:19:45 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 155 195 2 234 59 235 152 172 67 252 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:45 checksumer: &{sum:600627 oddByte:33 length:39}
2022/06/10 23:19:45 ret:  600660
2022/06/10 23:19:45 ret:  10845
2022/06/10 23:19:45 ret:  10845
2022/06/10 23:19:45 boom packet injected
2022/06/10 23:19:45 tcp packet: &{SrcPort:39875 DestPort:9000 Seq:2561426428 Ack:49005195 Flags:32785 WindowSize:229 Checksum:1528 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:47 tcp packet: &{SrcPort:36691 DestPort:9000 Seq:2725089687 Ack:608732331 Flags:32784 WindowSize:229 Checksum:51191 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:47 tcp packet: &{SrcPort:39892 DestPort:9000 Seq:2578746216 Ack:0 Flags:40962 WindowSize:29200 Checksum:55219 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:47 tcp packet: &{SrcPort:39892 DestPort:9000 Seq:2578746217 Ack:2646150432 Flags:32784 WindowSize:229 Checksum:51268 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:47 connection established
2022/06/10 23:19:47 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 155 212 157 183 134 128 153 180 139 105 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:47 checksumer: &{sum:529250 oddByte:33 length:39}
2022/06/10 23:19:47 ret:  529283
2022/06/10 23:19:47 ret:  5003
2022/06/10 23:19:47 ret:  5003
2022/06/10 23:19:47 boom packet injected
2022/06/10 23:19:47 tcp packet: &{SrcPort:39892 DestPort:9000 Seq:2578746217 Ack:2646150432 Flags:32785 WindowSize:229 Checksum:51267 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:49 tcp packet: &{SrcPort:36372 DestPort:9000 Seq:4180572769 Ack:1616945775 Flags:32784 WindowSize:229 Checksum:8236 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:49 tcp packet: &{SrcPort:40807 DestPort:9000 Seq:243386555 Ack:0 Flags:40962 WindowSize:29200 Checksum:6704 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:49 tcp packet: &{SrcPort:40807 DestPort:9000 Seq:243386556 Ack:3977153074 Flags:32784 WindowSize:229 Checksum:14003 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:49 connection established
2022/06/10 23:19:49 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 159 103 237 13 3 146 14 129 200 188 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:49 checksumer: &{sum:470501 oddByte:33 length:39}
2022/06/10 23:19:49 ret:  470534
2022/06/10 23:19:49 ret:  11789
2022/06/10 23:19:49 ret:  11789
2022/06/10 23:19:49 boom packet injected
2022/06/10 23:19:49 tcp packet: &{SrcPort:40807 DestPort:9000 Seq:243386556 Ack:3977153074 Flags:32785 WindowSize:229 Checksum:14002 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:51 tcp packet: &{SrcPort:34196 DestPort:9000 Seq:3339903873 Ack:2893024678 Flags:32784 WindowSize:229 Checksum:10944 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:51 tcp packet: &{SrcPort:43036 DestPort:9000 Seq:3334424215 Ack:0 Flags:40962 WindowSize:29200 Checksum:54160 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:51 tcp packet: &{SrcPort:43036 DestPort:9000 Seq:3334424216 Ack:3769799747 Flags:32784 WindowSize:229 Checksum:60046 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:51 connection established
2022/06/10 23:19:51 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 168 28 224 177 13 163 198 191 70 152 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:51 checksumer: &{sum:504353 oddByte:33 length:39}
2022/06/10 23:19:51 ret:  504386
2022/06/10 23:19:51 ret:  45641
2022/06/10 23:19:51 ret:  45641
2022/06/10 23:19:51 boom packet injected
2022/06/10 23:19:51 tcp packet: &{SrcPort:43036 DestPort:9000 Seq:3334424216 Ack:3769799747 Flags:32785 WindowSize:229 Checksum:60045 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:53 tcp packet: &{SrcPort:33621 DestPort:9000 Seq:1090465223 Ack:3595519715 Flags:32784 WindowSize:229 Checksum:62989 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:53 tcp packet: &{SrcPort:41328 DestPort:9000 Seq:1105580944 Ack:0 Flags:40962 WindowSize:29200 Checksum:51788 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:53 tcp packet: &{SrcPort:41328 DestPort:9000 Seq:1105580945 Ack:195114972 Flags:32784 WindowSize:229 Checksum:2802 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:53 connection established
2022/06/10 23:19:53 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 161 112 11 159 177 60 65 229 211 145 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:53 checksumer: &{sum:502769 oddByte:33 length:39}
2022/06/10 23:19:53 ret:  502802
2022/06/10 23:19:53 ret:  44057
2022/06/10 23:19:53 ret:  44057
2022/06/10 23:19:53 boom packet injected
2022/06/10 23:19:53 tcp packet: &{SrcPort:41328 DestPort:9000 Seq:1105580945 Ack:195114972 Flags:32785 WindowSize:229 Checksum:2801 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:55 tcp packet: &{SrcPort:39875 DestPort:9000 Seq:2561426429 Ack:49005196 Flags:32784 WindowSize:229 Checksum:47062 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:55 tcp packet: &{SrcPort:38214 DestPort:9000 Seq:242279397 Ack:0 Flags:40962 WindowSize:29200 Checksum:61893 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:55 tcp packet: &{SrcPort:38214 DestPort:9000 Seq:242279398 Ack:3119555811 Flags:32784 WindowSize:229 Checksum:3908 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:55 connection established
2022/06/10 23:19:55 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 149 70 185 239 30 67 14 112 227 230 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:55 checksumer: &{sum:506077 oddByte:33 length:39}
2022/06/10 23:19:55 ret:  506110
2022/06/10 23:19:55 ret:  47365
2022/06/10 23:19:55 ret:  47365
2022/06/10 23:19:55 boom packet injected
2022/06/10 23:19:55 tcp packet: &{SrcPort:38214 DestPort:9000 Seq:242279398 Ack:3119555811 Flags:32785 WindowSize:229 Checksum:3907 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:57 tcp packet: &{SrcPort:37870 DestPort:9000 Seq:3794841122 Ack:0 Flags:40962 WindowSize:29200 Checksum:19791 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:57 tcp packet: &{SrcPort:37870 DestPort:9000 Seq:3794841123 Ack:4045351211 Flags:32784 WindowSize:229 Checksum:42885 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:57 connection established
2022/06/10 23:19:57 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 147 238 241 29 162 139 226 48 174 35 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:57 checksumer: &{sum:447798 oddByte:33 length:39}
2022/06/10 23:19:57 ret:  447831
2022/06/10 23:19:57 ret:  54621
2022/06/10 23:19:57 ret:  54621
2022/06/10 23:19:57 boom packet injected
2022/06/10 23:19:57 tcp packet: &{SrcPort:37870 DestPort:9000 Seq:3794841123 Ack:4045351211 Flags:32785 WindowSize:229 Checksum:42884 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:57 tcp packet: &{SrcPort:39892 DestPort:9000 Seq:2578746218 Ack:2646150433 Flags:32784 WindowSize:229 Checksum:31266 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:59 tcp packet: &{SrcPort:40807 DestPort:9000 Seq:243386557 Ack:3977153075 Flags:32784 WindowSize:229 Checksum:59535 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:59 tcp packet: &{SrcPort:45747 DestPort:9000 Seq:3785939881 Ack:0 Flags:40962 WindowSize:29200 Checksum:63930 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:19:59 tcp packet: &{SrcPort:45747 DestPort:9000 Seq:3785939882 Ack:3506949155 Flags:32784 WindowSize:229 Checksum:51520 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:19:59 connection established
2022/06/10 23:19:59 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 178 179 209 6 69 131 225 168 219 170 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:19:59 checksumer: &{sum:489988 oddByte:33 length:39}
2022/06/10 23:19:59 ret:  490021
2022/06/10 23:19:59 ret:  31276
2022/06/10 23:19:59 ret:  31276
2022/06/10 23:19:59 boom packet injected
2022/06/10 23:19:59 tcp packet: &{SrcPort:45747 DestPort:9000 Seq:3785939882 Ack:3506949155 Flags:32785 WindowSize:229 Checksum:51519 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:01 tcp packet: &{SrcPort:43036 DestPort:9000 Seq:3334424217 Ack:3769799748 Flags:32784 WindowSize:229 Checksum:40043 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:01 tcp packet: &{SrcPort:38384 DestPort:9000 Seq:826603985 Ack:0 Flags:40962 WindowSize:29200 Checksum:41194 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:01 tcp packet: &{SrcPort:38384 DestPort:9000 Seq:826603986 Ack:2525685854 Flags:32784 WindowSize:229 Checksum:34530 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:01 connection established
2022/06/10 23:20:01 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 149 240 150 137 97 190 49 68 249 210 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:01 checksumer: &{sum:538678 oddByte:33 length:39}
2022/06/10 23:20:01 ret:  538711
2022/06/10 23:20:01 ret:  14431
2022/06/10 23:20:01 ret:  14431
2022/06/10 23:20:01 boom packet injected
2022/06/10 23:20:01 tcp packet: &{SrcPort:38384 DestPort:9000 Seq:826603986 Ack:2525685854 Flags:32785 WindowSize:229 Checksum:34529 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:03 tcp packet: &{SrcPort:41328 DestPort:9000 Seq:1105580946 Ack:195114973 Flags:32784 WindowSize:229 Checksum:48332 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:03 tcp packet: &{SrcPort:36313 DestPort:9000 Seq:2452648821 Ack:0 Flags:40962 WindowSize:29200 Checksum:50848 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:03 tcp packet: &{SrcPort:36313 DestPort:9000 Seq:2452648822 Ack:2013229671 Flags:32784 WindowSize:229 Checksum:14666 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:03 connection established
2022/06/10 23:20:03 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 141 217 119 253 235 199 146 48 115 118 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:03 checksumer: &{sum:536180 oddByte:33 length:39}
2022/06/10 23:20:03 ret:  536213
2022/06/10 23:20:03 ret:  11933
2022/06/10 23:20:03 ret:  11933
2022/06/10 23:20:03 boom packet injected
2022/06/10 23:20:03 tcp packet: &{SrcPort:36313 DestPort:9000 Seq:2452648822 Ack:2013229671 Flags:32785 WindowSize:229 Checksum:14665 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:05 tcp packet: &{SrcPort:38214 DestPort:9000 Seq:242279399 Ack:3119555812 Flags:32784 WindowSize:229 Checksum:49441 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:05 tcp packet: &{SrcPort:46861 DestPort:9000 Seq:3407611446 Ack:0 Flags:40962 WindowSize:29200 Checksum:51694 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:05 tcp packet: &{SrcPort:46861 DestPort:9000 Seq:3407611447 Ack:2949941778 Flags:32784 WindowSize:229 Checksum:58694 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:05 connection established
2022/06/10 23:20:05 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 183 13 175 211 3 114 203 28 6 55 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:05 checksumer: &{sum:430010 oddByte:33 length:39}
2022/06/10 23:20:05 ret:  430043
2022/06/10 23:20:05 ret:  36833
2022/06/10 23:20:05 ret:  36833
2022/06/10 23:20:05 boom packet injected
2022/06/10 23:20:05 tcp packet: &{SrcPort:46861 DestPort:9000 Seq:3407611447 Ack:2949941778 Flags:32785 WindowSize:229 Checksum:58693 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:07 tcp packet: &{SrcPort:37870 DestPort:9000 Seq:3794841124 Ack:4045351212 Flags:32784 WindowSize:229 Checksum:22882 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:07 tcp packet: &{SrcPort:35181 DestPort:9000 Seq:3810196125 Ack:0 Flags:40962 WindowSize:29200 Checksum:58199 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:07 tcp packet: &{SrcPort:35181 DestPort:9000 Seq:3810196126 Ack:1305232380 Flags:32784 WindowSize:229 Checksum:40702 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:07 connection established
2022/06/10 23:20:07 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 137 109 77 202 189 92 227 26 250 158 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:07 checksumer: &{sum:472816 oddByte:33 length:39}
2022/06/10 23:20:07 ret:  472849
2022/06/10 23:20:07 ret:  14104
2022/06/10 23:20:07 ret:  14104
2022/06/10 23:20:07 boom packet injected
2022/06/10 23:20:07 tcp packet: &{SrcPort:35181 DestPort:9000 Seq:3810196126 Ack:1305232380 Flags:32785 WindowSize:229 Checksum:40701 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:09 tcp packet: &{SrcPort:45747 DestPort:9000 Seq:3785939883 Ack:3506949156 Flags:32784 WindowSize:229 Checksum:31517 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:09 tcp packet: &{SrcPort:41518 DestPort:9000 Seq:1644919596 Ack:0 Flags:40962 WindowSize:29200 Checksum:49991 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:09 tcp packet: &{SrcPort:41518 DestPort:9000 Seq:1644919597 Ack:3957367328 Flags:32784 WindowSize:229 Checksum:31460 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:09 connection established
2022/06/10 23:20:09 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 162 46 235 223 27 128 98 11 123 45 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:09 checksumer: &{sum:438277 oddByte:33 length:39}
2022/06/10 23:20:09 ret:  438310
2022/06/10 23:20:09 ret:  45100
2022/06/10 23:20:09 ret:  45100
2022/06/10 23:20:09 boom packet injected
2022/06/10 23:20:09 tcp packet: &{SrcPort:41518 DestPort:9000 Seq:1644919597 Ack:3957367328 Flags:32785 WindowSize:229 Checksum:31459 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:11 tcp packet: &{SrcPort:38384 DestPort:9000 Seq:826603987 Ack:2525685855 Flags:32784 WindowSize:229 Checksum:14527 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:11 tcp packet: &{SrcPort:46869 DestPort:9000 Seq:3359930503 Ack:0 Flags:40962 WindowSize:29200 Checksum:17147 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:11 tcp packet: &{SrcPort:46869 DestPort:9000 Seq:3359930504 Ack:2590404902 Flags:32784 WindowSize:229 Checksum:30012 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:11 connection established
2022/06/10 23:20:11 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 183 21 154 100 234 134 200 68 120 136 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:11 checksumer: &{sum:440059 oddByte:33 length:39}
2022/06/10 23:20:11 ret:  440092
2022/06/10 23:20:11 ret:  46882
2022/06/10 23:20:11 ret:  46882
2022/06/10 23:20:11 boom packet injected
2022/06/10 23:20:11 tcp packet: &{SrcPort:46869 DestPort:9000 Seq:3359930504 Ack:2590404902 Flags:32785 WindowSize:229 Checksum:30011 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:13 tcp packet: &{SrcPort:36313 DestPort:9000 Seq:2452648823 Ack:2013229672 Flags:32784 WindowSize:229 Checksum:60198 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:13 tcp packet: &{SrcPort:39613 DestPort:9000 Seq:3063010476 Ack:0 Flags:40962 WindowSize:29200 Checksum:3345 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:13 tcp packet: &{SrcPort:39613 DestPort:9000 Seq:3063010477 Ack:2113579372 Flags:32784 WindowSize:229 Checksum:7079 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:13 connection established
2022/06/10 23:20:13 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 154 189 125 249 34 204 182 145 212 173 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:13 checksumer: &{sum:568131 oddByte:33 length:39}
2022/06/10 23:20:13 ret:  568164
2022/06/10 23:20:13 ret:  43884
2022/06/10 23:20:13 ret:  43884
2022/06/10 23:20:13 boom packet injected
2022/06/10 23:20:13 tcp packet: &{SrcPort:39613 DestPort:9000 Seq:3063010477 Ack:2113579372 Flags:32785 WindowSize:229 Checksum:7078 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:15 tcp packet: &{SrcPort:46861 DestPort:9000 Seq:3407611448 Ack:2949941779 Flags:32784 WindowSize:229 Checksum:38692 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:15 tcp packet: &{SrcPort:38971 DestPort:9000 Seq:3337276617 Ack:0 Flags:40962 WindowSize:29200 Checksum:65355 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:15 tcp packet: &{SrcPort:38971 DestPort:9000 Seq:3337276618 Ack:1082496115 Flags:32784 WindowSize:229 Checksum:22656 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:15 connection established
2022/06/10 23:20:15 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 152 59 64 132 13 211 198 234 204 202 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:15 checksumer: &{sum:536823 oddByte:33 length:39}
2022/06/10 23:20:15 ret:  536856
2022/06/10 23:20:15 ret:  12576
2022/06/10 23:20:15 ret:  12576
2022/06/10 23:20:15 boom packet injected
2022/06/10 23:20:15 tcp packet: &{SrcPort:38971 DestPort:9000 Seq:3337276618 Ack:1082496115 Flags:32785 WindowSize:229 Checksum:22655 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:17 tcp packet: &{SrcPort:35181 DestPort:9000 Seq:3810196127 Ack:1305232381 Flags:32784 WindowSize:229 Checksum:20699 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:17 tcp packet: &{SrcPort:37769 DestPort:9000 Seq:2446299222 Ack:0 Flags:40962 WindowSize:29200 Checksum:28092 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:17 tcp packet: &{SrcPort:37769 DestPort:9000 Seq:2446299223 Ack:482443061 Flags:32784 WindowSize:229 Checksum:63521 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:17 connection established
2022/06/10 23:20:17 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 147 137 28 191 248 149 145 207 144 87 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:17 checksumer: &{sum:519752 oddByte:33 length:39}
2022/06/10 23:20:17 ret:  519785
2022/06/10 23:20:17 ret:  61040
2022/06/10 23:20:17 ret:  61040
2022/06/10 23:20:17 boom packet injected
2022/06/10 23:20:17 tcp packet: &{SrcPort:37769 DestPort:9000 Seq:2446299223 Ack:482443061 Flags:32785 WindowSize:229 Checksum:63520 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:19 tcp packet: &{SrcPort:41518 DestPort:9000 Seq:1644919598 Ack:3957367329 Flags:32784 WindowSize:229 Checksum:11457 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:19 tcp packet: &{SrcPort:37656 DestPort:9000 Seq:909892398 Ack:0 Flags:40962 WindowSize:29200 Checksum:30489 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:19 tcp packet: &{SrcPort:37656 DestPort:9000 Seq:909892399 Ack:4008975174 Flags:32784 WindowSize:229 Checksum:35690 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:19 connection established
2022/06/10 23:20:19 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 147 24 238 242 148 166 54 59 219 47 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:19 checksumer: &{sum:460198 oddByte:33 length:39}
2022/06/10 23:20:19 ret:  460231
2022/06/10 23:20:19 ret:  1486
2022/06/10 23:20:19 ret:  1486
2022/06/10 23:20:19 boom packet injected
2022/06/10 23:20:19 tcp packet: &{SrcPort:37656 DestPort:9000 Seq:909892399 Ack:4008975174 Flags:32785 WindowSize:229 Checksum:35689 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:21 tcp packet: &{SrcPort:46869 DestPort:9000 Seq:3359930505 Ack:2590404903 Flags:32784 WindowSize:229 Checksum:10010 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:21 tcp packet: &{SrcPort:41192 DestPort:9000 Seq:3596819306 Ack:0 Flags:40962 WindowSize:29200 Checksum:33044 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:21 tcp packet: &{SrcPort:41192 DestPort:9000 Seq:3596819307 Ack:2353533108 Flags:32784 WindowSize:229 Checksum:64210 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:21 connection established
2022/06/10 23:20:21 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 160 232 140 70 138 20 214 99 27 107 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:21 checksumer: &{sum:457511 oddByte:33 length:39}
2022/06/10 23:20:21 ret:  457544
2022/06/10 23:20:21 ret:  64334
2022/06/10 23:20:21 ret:  64334
2022/06/10 23:20:21 boom packet injected
2022/06/10 23:20:21 tcp packet: &{SrcPort:41192 DestPort:9000 Seq:3596819307 Ack:2353533108 Flags:32785 WindowSize:229 Checksum:64209 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:23 tcp packet: &{SrcPort:39613 DestPort:9000 Seq:3063010478 Ack:2113579373 Flags:32784 WindowSize:229 Checksum:52611 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:23 tcp packet: &{SrcPort:42673 DestPort:9000 Seq:1900692035 Ack:0 Flags:40962 WindowSize:29200 Checksum:45469 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:23 tcp packet: &{SrcPort:42673 DestPort:9000 Seq:1900692036 Ack:373787913 Flags:32784 WindowSize:229 Checksum:7449 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:23 connection established
2022/06/10 23:20:23 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 166 177 22 70 6 105 113 74 66 68 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:23 checksumer: &{sum:448501 oddByte:33 length:39}
2022/06/10 23:20:23 ret:  448534
2022/06/10 23:20:23 ret:  55324
2022/06/10 23:20:23 ret:  55324
2022/06/10 23:20:23 boom packet injected
2022/06/10 23:20:23 tcp packet: &{SrcPort:42673 DestPort:9000 Seq:1900692036 Ack:373787913 Flags:32785 WindowSize:229 Checksum:7448 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:25 tcp packet: &{SrcPort:38971 DestPort:9000 Seq:3337276619 Ack:1082496116 Flags:32784 WindowSize:229 Checksum:2653 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:25 tcp packet: &{SrcPort:35887 DestPort:9000 Seq:1501348318 Ack:0 Flags:40962 WindowSize:29200 Checksum:23681 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:25 tcp packet: &{SrcPort:35887 DestPort:9000 Seq:1501348319 Ack:1880411294 Flags:32784 WindowSize:229 Checksum:9931 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:25 connection established
2022/06/10 23:20:25 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 140 47 112 19 69 254 89 124 193 223 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:25 checksumer: &{sum:493019 oddByte:33 length:39}
2022/06/10 23:20:25 ret:  493052
2022/06/10 23:20:25 ret:  34307
2022/06/10 23:20:25 ret:  34307
2022/06/10 23:20:25 boom packet injected
2022/06/10 23:20:25 tcp packet: &{SrcPort:35887 DestPort:9000 Seq:1501348319 Ack:1880411294 Flags:32785 WindowSize:229 Checksum:9930 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:27 tcp packet: &{SrcPort:37769 DestPort:9000 Seq:2446299224 Ack:482443062 Flags:32784 WindowSize:229 Checksum:43518 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:27 tcp packet: &{SrcPort:32861 DestPort:9000 Seq:404283007 Ack:0 Flags:40962 WindowSize:29200 Checksum:34117 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:27 tcp packet: &{SrcPort:32861 DestPort:9000 Seq:404283008 Ack:3886625167 Flags:32784 WindowSize:229 Checksum:27448 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:27 connection established
2022/06/10 23:20:27 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 128 93 231 167 170 239 24 24 222 128 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:27 checksumer: &{sum:489095 oddByte:33 length:39}
2022/06/10 23:20:27 ret:  489128
2022/06/10 23:20:27 ret:  30383
2022/06/10 23:20:27 ret:  30383
2022/06/10 23:20:27 boom packet injected
2022/06/10 23:20:27 tcp packet: &{SrcPort:32861 DestPort:9000 Seq:404283008 Ack:3886625167 Flags:32785 WindowSize:229 Checksum:27447 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:29 tcp packet: &{SrcPort:37656 DestPort:9000 Seq:909892400 Ack:4008975175 Flags:32784 WindowSize:229 Checksum:15687 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:29 tcp packet: &{SrcPort:45337 DestPort:9000 Seq:2599002687 Ack:0 Flags:40962 WindowSize:29200 Checksum:1576 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:29 tcp packet: &{SrcPort:45337 DestPort:9000 Seq:2599002688 Ack:2074073369 Flags:32784 WindowSize:229 Checksum:43208 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:29 connection established
2022/06/10 23:20:29 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 177 25 123 158 82 121 154 233 162 64 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:29 checksumer: &{sum:476218 oddByte:33 length:39}
2022/06/10 23:20:29 ret:  476251
2022/06/10 23:20:29 ret:  17506
2022/06/10 23:20:29 ret:  17506
2022/06/10 23:20:29 boom packet injected
2022/06/10 23:20:29 tcp packet: &{SrcPort:45337 DestPort:9000 Seq:2599002688 Ack:2074073369 Flags:32785 WindowSize:229 Checksum:43207 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:31 tcp packet: &{SrcPort:41192 DestPort:9000 Seq:3596819308 Ack:2353533109 Flags:32784 WindowSize:229 Checksum:44207 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:31 tcp packet: &{SrcPort:44967 DestPort:9000 Seq:685291653 Ack:0 Flags:40962 WindowSize:29200 Checksum:23443 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:31 tcp packet: &{SrcPort:44967 DestPort:9000 Seq:685291654 Ack:37770200 Flags:32784 WindowSize:229 Checksum:62723 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:31 connection established
2022/06/10 23:20:31 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 175 167 2 62 205 56 40 216 184 134 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:31 checksumer: &{sum:484830 oddByte:33 length:39}
2022/06/10 23:20:31 ret:  484863
2022/06/10 23:20:31 ret:  26118
2022/06/10 23:20:31 ret:  26118
2022/06/10 23:20:31 boom packet injected
2022/06/10 23:20:31 tcp packet: &{SrcPort:44967 DestPort:9000 Seq:685291654 Ack:37770200 Flags:32785 WindowSize:229 Checksum:62722 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:33 tcp packet: &{SrcPort:42673 DestPort:9000 Seq:1900692037 Ack:373787914 Flags:32784 WindowSize:229 Checksum:52981 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:33 tcp packet: &{SrcPort:40252 DestPort:9000 Seq:1938525962 Ack:0 Flags:40962 WindowSize:29200 Checksum:17654 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.107
2022/06/10 23:20:33 tcp packet: &{SrcPort:40252 DestPort:9000 Seq:1938525963 Ack:1458313215 Flags:32784 WindowSize:229 Checksum:48579 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.107
2022/06/10 23:20:33 connection established
2022/06/10 23:20:33 calling checksumTCP: 10.244.4.81 10.244.3.107 [35 40 157 60 86 234 145 95 115 139 143 11 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/10 23:20:33 checksumer: &{sum:460294 oddByte:33 length:39}
2022/06/10 23:20:33 ret:  460327
2022/06/10 23:20:33 ret:  1582
2022/06/10 23:20:33 ret:  1582
2022/06/10 23:20:33 boom packet injected
2022/06/10 23:20:33 tcp packet: &{SrcPort:40252 DestPort:9000 Seq:1938525963 Ack:1458313215 Flags:32785 WindowSize:229 Checksum:48578 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.107

Jun 10 23:20:35.081: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:35.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-6633" for this suite.


• [SLOW TEST:78.118 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":2,"skipped":83,"failed":0}
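The `checksumer` lines above show a running 32-bit sum (plus a trailing odd byte) being folded into a 16-bit one's-complement value: each pair of `ret:` lines is the sum before and after folding (e.g. 489095 + 33 = 489128, which folds to 30383). A minimal sketch of that fold, with values taken from the log (the helper name `fold` is illustrative, not the test binary's):

```go
package main

import "fmt"

// fold reduces a 32-bit running sum to a 16-bit one's-complement
// checksum by adding the carry bits back into the low 16 bits until
// none remain, matching the before/after "ret:" pairs in the log.
func fold(sum uint32) uint16 {
	for sum > 0xffff {
		sum = (sum >> 16) + (sum & 0xffff)
	}
	return uint16(sum)
}

func main() {
	// Sums from the four checksumer lines above, each followed by the
	// folded value the log reports on the next "ret:" line.
	for _, sum := range []uint32{489128, 476251, 484863, 460327} {
		fmt.Println(sum, "->", fold(sum))
	}
}
```

The payload bytes `[98 111 111 109 33 33 33]` in the `calling checksumTCP` lines decode to the ASCII string "boom!!!", which is why the injected probe is logged as a "boom packet".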

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:35.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun 10 23:20:35.436: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:35.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-1260" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
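The skip above is provider gating: in `BeforeEach` the spec checks the configured cloud provider against an allow-list and skips when it does not match (here, `local` is not in `[gce gke]`). A simplified stand-in for that check (the function name is hypothetical, not the e2e framework's API):

```go
package main

import "fmt"

// providerSupported reports whether the configured provider is in the
// spec's allow-list; the e2e framework skips the spec when it is not.
func providerSupported(current string, supported ...string) bool {
	for _, p := range supported {
		if p == current {
			return true
		}
	}
	return false
}

func main() {
	// Mirrors the log: "Only supported for providers [gce gke] (not local)".
	fmt.Println(providerSupported("local", "gce", "gke"))
}
```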
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:24.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Jun 10 23:20:34.650: INFO: Waiting up to 2m0s to get response from 10.244.2.4:8080
Jun 10 23:20:34.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test2nlrj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.4:8080/clientip'
Jun 10 23:20:34.896: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.4:8080/clientip\n"
Jun 10 23:20:34.897: INFO: stdout: "10.244.0.10:46468"
STEP: Verifying the preserved source ip
Jun 10 23:20:34.897: INFO: Waiting up to 2m0s to get response from 10.244.3.129:8080
Jun 10 23:20:34.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test2nlrj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.129:8080/clientip'
Jun 10 23:20:35.148: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.129:8080/clientip\n"
Jun 10 23:20:35.148: INFO: stdout: "10.244.0.10:39742"
STEP: Verifying the preserved source ip
Jun 10 23:20:35.148: INFO: Waiting up to 2m0s to get response from 10.244.4.113:8080
Jun 10 23:20:35.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test2nlrj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.113:8080/clientip'
Jun 10 23:20:35.387: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.113:8080/clientip\n"
Jun 10 23:20:35.387: INFO: stdout: "10.244.0.10:34524"
STEP: Verifying the preserved source ip
Jun 10 23:20:35.387: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
Jun 10 23:20:35.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test2nlrj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
Jun 10 23:20:35.635: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
Jun 10 23:20:35.635: INFO: stdout: "10.244.0.10:39174"
STEP: Verifying the preserved source ip
Jun 10 23:20:35.635: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Jun 10 23:20:35.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test2t427 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Jun 10 23:20:35.871: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Jun 10 23:20:35.871: INFO: stdout: "10.244.2.4:59134"
STEP: Verifying the preserved source ip
Jun 10 23:20:35.871: INFO: Waiting up to 2m0s to get response from 10.244.3.129:8080
Jun 10 23:20:35.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test2t427 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.129:8080/clientip'
Jun 10 23:20:36.122: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.129:8080/clientip\n"
Jun 10 23:20:36.122: INFO: stdout: "10.244.2.4:50564"
STEP: Verifying the preserved source ip
Jun 10 23:20:36.122: INFO: Waiting up to 2m0s to get response from 10.244.4.113:8080
Jun 10 23:20:36.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test2t427 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.113:8080/clientip'
Jun 10 23:20:36.381: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.113:8080/clientip\n"
Jun 10 23:20:36.382: INFO: stdout: "10.244.2.4:37390"
STEP: Verifying the preserved source ip
Jun 10 23:20:36.382: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
Jun 10 23:20:36.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test2t427 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
Jun 10 23:20:36.626: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
Jun 10 23:20:36.626: INFO: stdout: "10.244.2.4:46352"
STEP: Verifying the preserved source ip
Jun 10 23:20:36.626: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Jun 10 23:20:36.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test6lc2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Jun 10 23:20:37.154: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Jun 10 23:20:37.154: INFO: stdout: "10.244.3.129:59682"
STEP: Verifying the preserved source ip
Jun 10 23:20:37.154: INFO: Waiting up to 2m0s to get response from 10.244.2.4:8080
Jun 10 23:20:37.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test6lc2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.4:8080/clientip'
Jun 10 23:20:37.517: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.4:8080/clientip\n"
Jun 10 23:20:37.518: INFO: stdout: "10.244.3.129:50498"
STEP: Verifying the preserved source ip
Jun 10 23:20:37.518: INFO: Waiting up to 2m0s to get response from 10.244.4.113:8080
Jun 10 23:20:37.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test6lc2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.113:8080/clientip'
Jun 10 23:20:38.115: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.113:8080/clientip\n"
Jun 10 23:20:38.115: INFO: stdout: "10.244.3.129:38642"
STEP: Verifying the preserved source ip
Jun 10 23:20:38.115: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
Jun 10 23:20:38.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test6lc2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
Jun 10 23:20:38.397: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
Jun 10 23:20:38.397: INFO: stdout: "10.244.3.129:53488"
STEP: Verifying the preserved source ip
Jun 10 23:20:38.397: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Jun 10 23:20:38.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test6m6z2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Jun 10 23:20:38.691: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Jun 10 23:20:38.691: INFO: stdout: "10.244.4.113:40384"
STEP: Verifying the preserved source ip
Jun 10 23:20:38.691: INFO: Waiting up to 2m0s to get response from 10.244.2.4:8080
Jun 10 23:20:38.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test6m6z2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.4:8080/clientip'
Jun 10 23:20:39.134: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.4:8080/clientip\n"
Jun 10 23:20:39.134: INFO: stdout: "10.244.4.113:59810"
STEP: Verifying the preserved source ip
Jun 10 23:20:39.134: INFO: Waiting up to 2m0s to get response from 10.244.3.129:8080
Jun 10 23:20:39.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test6m6z2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.129:8080/clientip'
Jun 10 23:20:39.429: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.129:8080/clientip\n"
Jun 10 23:20:39.429: INFO: stdout: "10.244.4.113:38664"
STEP: Verifying the preserved source ip
Jun 10 23:20:39.429: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
Jun 10 23:20:39.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test6m6z2 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
Jun 10 23:20:39.717: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
Jun 10 23:20:39.717: INFO: stdout: "10.244.4.113:58224"
STEP: Verifying the preserved source ip
Jun 10 23:20:39.717: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Jun 10 23:20:39.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test84vf9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Jun 10 23:20:39.973: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Jun 10 23:20:39.973: INFO: stdout: "10.244.1.6:36172"
STEP: Verifying the preserved source ip
Jun 10 23:20:39.973: INFO: Waiting up to 2m0s to get response from 10.244.2.4:8080
Jun 10 23:20:39.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test84vf9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.4:8080/clientip'
Jun 10 23:20:40.246: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.4:8080/clientip\n"
Jun 10 23:20:40.246: INFO: stdout: "10.244.1.6:49192"
STEP: Verifying the preserved source ip
Jun 10 23:20:40.246: INFO: Waiting up to 2m0s to get response from 10.244.3.129:8080
Jun 10 23:20:40.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test84vf9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.129:8080/clientip'
Jun 10 23:20:40.501: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.129:8080/clientip\n"
Jun 10 23:20:40.501: INFO: stdout: "10.244.1.6:38134"
STEP: Verifying the preserved source ip
Jun 10 23:20:40.501: INFO: Waiting up to 2m0s to get response from 10.244.4.113:8080
Jun 10 23:20:40.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-3548 exec no-snat-test84vf9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.113:8080/clientip'
Jun 10 23:20:40.759: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.113:8080/clientip\n"
Jun 10 23:20:40.759: INFO: stdout: "10.244.1.6:60506"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:40.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-3548" for this suite.


• [SLOW TEST:16.211 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":2,"skipped":471,"failed":0}
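Each "Verifying the preserved source ip" step above compares the `ip:port` string returned by the target pod's `/clientip` endpoint against the curling pod's own IP; if SNAT had occurred, the host part would instead be a node address. A minimal sketch of that comparison, using values from the log (the helper name is illustrative, not the e2e test's):

```go
package main

import (
	"fmt"
	"net"
)

// sourcePreserved reports whether the "ip:port" string returned by the
// /clientip endpoint carries the expected pod IP, i.e. no SNAT occurred
// on the pod-to-pod path.
func sourcePreserved(clientip, podIP string) bool {
	host, _, err := net.SplitHostPort(clientip)
	if err != nil {
		return false
	}
	return host == podIP
}

func main() {
	// From the log: pod 10.244.0.10 curled 10.244.2.4:8080/clientip and
	// the endpoint echoed back "10.244.0.10:46468".
	fmt.Println(sourcePreserved("10.244.0.10:46468", "10.244.0.10"))
}
```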

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:41.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Jun 10 23:20:41.106: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:41.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-7686" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:13.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-5879
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:20:13.692: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:20:13.725: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:15.729: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:17.729: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:19.728: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:21.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:23.729: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:25.728: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:27.729: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:29.728: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:31.920: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:33.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:35.729: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:20:35.734: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:20:45.756: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:20:45.756: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:20:45.765: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:45.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5879" for this suite.


S [SKIPPING] [32.214 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:45.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun 10 23:20:46.014: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:46.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-2203" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:14.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-6412
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:20:14.419: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:20:14.452: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:16.455: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:18.458: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:20.456: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:22.456: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:24.457: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:26.456: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:28.458: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:30.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:32.458: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:34.459: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:20:34.463: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:20:36.468: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:20:48.505: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:20:48.505: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:20:48.512: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:48.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6412" for this suite.


S [SKIPPING] [34.217 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:17.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-583
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:20:17.878: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:20:17.911: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:19.915: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:21.915: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:23.917: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:25.915: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:27.917: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:29.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:31.918: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:33.918: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:35.916: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:37.916: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:20:37.921: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:20:39.924: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:20:51.971: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:20:51.971: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:20:51.978: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:51.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-583" for this suite.


S [SKIPPING] [34.217 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:01.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
Jun 10 23:20:01.574: INFO: deploying iperf2 server
Jun 10 23:20:01.577: INFO: Waiting for deployment "iperf2-server-deployment" to complete
Jun 10 23:20:01.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Jun 10 23:20:03.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 23:20:05.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 23:20:07.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63790500001, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 10 23:20:09.597: INFO: waiting for iperf2 server endpoints
Jun 10 23:20:11.600: INFO: found iperf2 server endpoints
Jun 10 23:20:11.601: INFO: waiting for client pods to be running
Jun 10 23:20:21.605: INFO: all client pods are ready: 2 pods
Jun 10 23:20:21.608: INFO: server pod phase Running
Jun 10 23:20:21.608: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 23:20:01 +0000 UTC Reason: Message:}
Jun 10 23:20:21.608: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 23:20:05 +0000 UTC Reason: Message:}
Jun 10 23:20:21.608: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 23:20:05 +0000 UTC Reason: Message:}
Jun 10 23:20:21.608: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-10 23:20:01 +0000 UTC Reason: Message:}
Jun 10 23:20:21.608: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2022-06-10 23:20:05 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://0a4d644f284a0e6de9c8b8df2e947bfe040a34bf621c9d21ffbc356e6da07c22 Started:0xc000a0ee4c}
Jun 10 23:20:21.609: INFO: found 2 matching client pods
Jun 10 23:20:21.612: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-2923 PodName:iperf2-clients-hf2ms ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:20:21.612: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:20:21.699: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Jun 10 23:20:21.699: INFO: iperf version: 
Jun 10 23:20:21.699: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-hf2ms (node node2)
Jun 10 23:20:21.702: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-2923 PodName:iperf2-clients-hf2ms ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:20:21.702: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:20:36.883: INFO: Exec stderr: ""
Jun 10 23:20:36.883: INFO: output from exec on client pod iperf2-clients-hf2ms (node node2): 
20220610232022.801,10.244.4.105,47458,10.233.11.112,6789,3,0.0-1.0,3530948608,28247588864
20220610232023.787,10.244.4.105,47458,10.233.11.112,6789,3,1.0-2.0,3477471232,27819769856
20220610232024.794,10.244.4.105,47458,10.233.11.112,6789,3,2.0-3.0,3470655488,27765243904
20220610232025.801,10.244.4.105,47458,10.233.11.112,6789,3,3.0-4.0,3565551616,28524412928
20220610232026.788,10.244.4.105,47458,10.233.11.112,6789,3,4.0-5.0,3493199872,27945598976
20220610232027.795,10.244.4.105,47458,10.233.11.112,6789,3,5.0-6.0,2958426112,23667408896
20220610232028.802,10.244.4.105,47458,10.233.11.112,6789,3,6.0-7.0,3077046272,24616370176
20220610232029.789,10.244.4.105,47458,10.233.11.112,6789,3,7.0-8.0,3387031552,27096252416
20220610232030.796,10.244.4.105,47458,10.233.11.112,6789,3,8.0-9.0,3528458240,28227665920
20220610232031.803,10.244.4.105,47458,10.233.11.112,6789,3,9.0-10.0,3480223744,27841789952
20220610232031.803,10.244.4.105,47458,10.233.11.112,6789,3,0.0-10.0,33969012736,27175123228

Jun 10 23:20:36.887: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-2923 PodName:iperf2-clients-jvc7n ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:20:36.887: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:20:37.069: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Jun 10 23:20:37.069: INFO: iperf version: 
Jun 10 23:20:37.069: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-jvc7n (node node1)
Jun 10 23:20:37.071: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-2923 PodName:iperf2-clients-jvc7n ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:20:37.071: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:20:52.269: INFO: Exec stderr: ""
Jun 10 23:20:52.269: INFO: output from exec on client pod iperf2-clients-jvc7n (node node1): 
20220610232038.199,10.244.3.124,41730,10.233.11.112,6789,3,0.0-1.0,116654080,933232640
20220610232039.184,10.244.3.124,41730,10.233.11.112,6789,3,1.0-2.0,117833728,942669824
20220610232040.251,10.244.3.124,41730,10.233.11.112,6789,3,2.0-3.0,118751232,950009856
20220610232041.198,10.244.3.124,41730,10.233.11.112,6789,3,3.0-4.0,89653248,717225984
20220610232042.204,10.244.3.124,41730,10.233.11.112,6789,3,4.0-5.0,117964800,943718400
20220610232043.210,10.244.3.124,41730,10.233.11.112,6789,3,5.0-6.0,117964800,943718400
20220610232044.197,10.244.3.124,41730,10.233.11.112,6789,3,6.0-7.0,117702656,941621248
20220610232045.186,10.244.3.124,41730,10.233.11.112,6789,3,7.0-8.0,117047296,936378368
20220610232046.198,10.244.3.124,41730,10.233.11.112,6789,3,8.0-9.0,117833728,942669824
20220610232047.207,10.244.3.124,41730,10.233.11.112,6789,3,9.0-10.0,118095872,944766976
20220610232047.207,10.244.3.124,41730,10.233.11.112,6789,3,0.0-10.0,1149501440,918678982

Jun 10 23:20:52.269: INFO:                                From                                 To    Bandwidth (MB/s)
Jun 10 23:20:52.269: INFO:                               node1                              node2                 110
Jun 10 23:20:52.269: INFO:                               node2                              node2                3240
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:52.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-2923" for this suite.


• [SLOW TEST:50.732 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":1,"skipped":292,"failed":0}
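An aside on reading the CSV rows in the spec above: they are iperf 2.0.13 `-e --reportstyle C` records, and the From/To bandwidth table the test prints is derived from the byte counts in them. A minimal Python sketch of that derivation (field order inferred from the rows themselves; this is not the e2e framework's actual parser):

```python
# Sketch (assumption): parse one `iperf -e --reportstyle C` CSV record, whose
# fields appear to be: date, src_ip, src_port, dst_ip, dst_port, transfer_id,
# interval, bytes_transferred, bits_per_second.

def bandwidth_mb_s(csv_line: str) -> float:
    """Average bandwidth in MB/s over the record's interval."""
    fields = csv_line.split(",")
    start, end = (float(x) for x in fields[6].split("-"))
    transferred_bytes = int(fields[7])
    return transferred_bytes / (end - start) / (1 << 20)

# The 0.0-10.0 summary rows from the two client pods in the log:
node2_line = "20220610232031.803,10.244.4.105,47458,10.233.11.112,6789,3,0.0-10.0,33969012736,27175123228"
node1_line = "20220610232047.207,10.244.3.124,41730,10.233.11.112,6789,3,0.0-10.0,1149501440,918678982"

print(round(bandwidth_mb_s(node2_line)))  # 3240 (node2 -> node2 row of the table)
print(round(bandwidth_mb_s(node1_line)))  # 110 (node1 -> node2 row of the table)
```

Both figures match the summary table printed just before the [AfterEach] block.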

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:52.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Jun 10 23:20:52.716: INFO: (0) /api/v1/nodes/node1/proxy/logs/: 
anaconda/
audit/
boot.log
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3344.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3344.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3344.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3344.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3344.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3344.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 10 23:20:54.183: INFO: DNS probes using dns-3344/dns-test-e402c1cb-a9c6-4f14-9a5f-b5e7be245801 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:54.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3344" for this suite.


• [SLOW TEST:8.120 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":2,"skipped":567,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:54.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Jun 10 23:20:54.426: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:20:54.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-123" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:52.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Jun 10 23:20:52.300: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-1900" to be "Succeeded or Failed"
Jun 10 23:20:52.302: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089479ms
Jun 10 23:20:54.305: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005104031s
Jun 10 23:20:56.308: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008068864s
Jun 10 23:20:58.311: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011302044s
Jun 10 23:21:00.315: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014872309s
Jun 10 23:21:02.318: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018542595s
Jun 10 23:21:04.322: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.022080941s
STEP: Saw pod success
Jun 10 23:21:04.322: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:04.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1900" for this suite.


• [SLOW TEST:12.157 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":1,"skipped":407,"failed":0}
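The `Waiting up to 5m0s for pod "connectivity-test" ... to be "Succeeded or Failed"` lines above come from a fixed-interval poll loop. A hedged sketch of such a loop (`get_phase` is a stand-in for a real API client call, not the framework's actual helper):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase
    ('Succeeded' or 'Failed') or the timeout elapses.

    Sketch only: get_phase is an assumed callable returning the pod phase
    string; the real framework logs elapsed time on each poll as seen above.
    """
    waited = 0.0
    phase = get_phase()
    while phase not in ("Succeeded", "Failed"):
        if waited >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
        waited += interval
        phase = get_phase()
    return phase

# Simulate the log above: six 'Pending' polls, then 'Succeeded'.
phases = iter(["Pending"] * 6 + ["Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None))  # Succeeded
```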

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:48.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Jun 10 23:20:48.686: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-38e8a7e4-6794-4d19-b076-0bf3a8e01d50  dns-9681  ef780564-3525-420f-b625-b0b29cba59ca 74529 0 2022-06-10 23:20:48 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-06-10 23:20:48 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-xf5nn,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-dxs25,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dxs25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 10 23:20:52.698: INFO: testServerIP is 10.244.3.138
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jun 10 23:20:52.709: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-9681  91966e20-2720-4f62-90f9-d1fbbb2c083d 74646 0 2022-06-10 23:20:52 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-06-10 23:20:52 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s92c8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s92c8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.3.138],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Jun 10 23:21:04.717: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-9681 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:21:04.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Jun 10 23:21:05.139: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-9681 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 10 23:21:05.139: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:21:05.409: INFO: Deleting pod e2e-dns-utils...
Jun 10 23:21:05.416: INFO: Deleting pod e2e-configmap-dns-server-38e8a7e4-6794-4d19-b076-0bf3a8e01d50...
Jun 10 23:21:05.422: INFO: Deleting configmap e2e-coredns-configmap-xf5nn...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:05.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9681" for this suite.


• [SLOW TEST:16.784 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":2,"skipped":220,"failed":0}
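An aside on the spec above: with `dnsPolicy: None`, the kubelet writes the pod's `/etc/resolv.conf` directly from the `DNSConfig` visible in the pod dump (nameserver 10.244.3.138, search `resolv.conf.local`, option `ndots:2`), which is what the `cat /etc/resolv.conf` exec verifies. A minimal sketch of that rendering in glibc resolv.conf syntax (an illustration, not kubelet's actual code):

```python
def render_resolv_conf(nameservers, searches, options):
    """Render a resolv.conf(5)-style file from a pod DNSConfig.

    `options` is a list of (name, value) pairs; value None means a bare flag.
    Sketch only: real kubelet output may differ in ordering/whitespace.
    """
    lines = [f"nameserver {ip}" for ip in nameservers]
    if searches:
        lines.append("search " + " ".join(searches))
    if options:
        lines.append("options " + " ".join(
            name if value is None else f"{name}:{value}"
            for name, value in options))
    return "\n".join(lines) + "\n"

# The DNSConfig from the e2e-dns-utils pod above:
print(render_resolv_conf(["10.244.3.138"], ["resolv.conf.local"], [("ndots", 2)]))
# nameserver 10.244.3.138
# search resolv.conf.local
# options ndots:2
```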

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:35.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-2
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:20:35.440: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:20:35.479: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:37.482: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:39.482: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:41.482: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:43.483: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:45.482: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:47.484: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:49.483: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:51.483: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:53.484: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:55.484: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:20:55.489: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:20:57.492: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:20:59.491: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:21:11.527: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:21:11.527: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:21:11.534: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:11.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2" for this suite.


S [SKIPPING] [36.233 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:57.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-636
STEP: creating service service-headless in namespace services-636
STEP: creating replication controller service-headless in namespace services-636
I0610 23:19:57.833257      28 runners.go:190] Created replication controller with name: service-headless, namespace: services-636, replica count: 3
I0610 23:20:00.885769      28 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:03.890145      28 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:06.890628      28 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:09.891827      28 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-636
STEP: creating service service-headless-toggled in namespace services-636
STEP: creating replication controller service-headless-toggled in namespace services-636
I0610 23:20:09.908995      28 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-636, replica count: 3
I0610 23:20:12.961358      28 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:15.962439      28 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:18.963598      28 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:21.964717      28 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Jun 10 23:20:21.967: INFO: Creating new host exec pod
Jun 10 23:20:21.981: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:23.985: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:25.984: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:27.987: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:29.985: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:31.984: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 10 23:20:31.984: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 10 23:20:36.004: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.210:80 2>&1 || true; echo; done" in pod services-636/verify-service-up-host-exec-pod
Jun 10 23:20:36.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.210:80 2>&1 || true; echo; done'
Jun 10 23:20:36.359: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n"
Jun 10 23:20:36.360: INFO: stdout: "service-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-head
less-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5n
jc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\n"
Jun 10 23:20:36.360: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.210:80 2>&1 || true; echo; done" in pod services-636/verify-service-up-exec-pod-6w5p9
Jun 10 23:20:36.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-up-exec-pod-6w5p9 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.210:80 2>&1 || true; echo; done'
Jun 10 23:20:37.078: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n"
Jun 10 23:20:37.078: INFO: stdout: "service-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-p5njc\nservice-headless-toggled-ts8pb\nservice-headless-toggled-45zbr\nservice-head
less-toggled-p5njc\n… (remaining responses omitted; every response names one of the three backends service-headless-toggled-45zbr, -p5njc, or -ts8pb) …"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-636
STEP: Deleting pod verify-service-up-exec-pod-6w5p9 in namespace services-636
STEP: verifying service-headless is not up
Jun 10 23:20:37.093: INFO: Creating new host exec pod
Jun 10 23:20:37.106: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:39.110: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:41.109: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
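The 2-second "Pending, waiting for it to be Running" poll above is a generic retry-until-ready loop. A minimal stand-alone sketch, with the readiness check passed in as a command (in the real test it is a pod-status probe; here it is left abstract, and the commented `kubectl` probe is an assumption):

```shell
#!/bin/sh
# Retry a readiness check until it succeeds or the attempt budget is
# exhausted -- the same shape as the poll in the log above.
# Usage: wait_ready <max_tries> <interval_seconds> <cmd...>
wait_ready() {
  tries=$1 interval=$2; shift 2
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@"; then return 0; fi   # check passed: pod is Ready
    i=$((i + 1))
    sleep "$interval"
  done
  return 1                       # deadline exhausted, still not Ready
}

# Hypothetical usage, polling a pod's Ready condition every 2 seconds:
# wait_ready 150 2 sh -c \
#   'kubectl -n services-636 get pod verify-service-down-host-exec-pod \
#      -o jsonpath="{.status.conditions[?(@.type==\"Ready\")].status}" | grep -q True'
```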
Jun 10 23:20:41.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.38.60:80 && echo service-down-failed'
Jun 10 23:20:43.420: INFO: rc: 28
Jun 10 23:20:43.420: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.38.60:80 && echo service-down-failed" in pod services-636/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.38.60:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.38.60:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-636
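The "service is not up" probe above leans on curl's exit codes: `--connect-timeout 2` makes curl give up with exit code 28 when nothing answers on the VIP, and the `&& echo service-down-failed` sentinel only fires if the request unexpectedly succeeds. A minimal sketch of the same pattern (the IP is the one from the log):

```shell
#!/bin/sh
# Probe a ClusterIP that is expected to be dead. curl exits 28 on a
# connect timeout, so the sentinel line is printed only on an
# unexpected success; the caller greps for it to detect a failure.
probe_down() {
  curl -g -s --connect-timeout 2 "http://$1" && echo service-down-failed
}

# In the log: probe_down 10.233.38.60:80 exits 28 with no sentinel,
# which the test treats as "service correctly down".
```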
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Jun 10 23:20:43.436: INFO: Creating new host exec pod
Jun 10 23:20:43.473: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:45.477: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:47.477: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:49.476: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 10 23:20:49.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.23.210:80 && echo service-down-failed'
Jun 10 23:20:51.834: INFO: rc: 28
Jun 10 23:20:51.834: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.23.210:80 && echo service-down-failed" in pod services-636/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.23.210:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.23.210:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-636
STEP: removing service.kubernetes.io/headless label
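The toggle in these steps is a label add/remove on the Service: kube-proxy skips services carrying the `service.kubernetes.io/headless` label, which is the behavior under test. A sketch of the two `kubectl` invocations, echoed rather than executed so it is safe without a cluster (the service name `service-headless-toggled` is inferred from the backend pod names and is an assumption):

```shell
#!/bin/sh
# Build the add/remove label commands for the headless toggle.
NS=services-636
SVC=service-headless-toggled   # assumed name, inferred from pod names

# "key=" sets the label with an empty value; kube-proxy stops serving the VIP.
add_cmd="kubectl -n $NS label service $SVC service.kubernetes.io/headless="
# A trailing "-" removes the label; normal proxying resumes.
del_cmd="kubectl -n $NS label service $SVC service.kubernetes.io/headless-"

echo "$add_cmd"
echo "$del_cmd"
```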
STEP: verifying service is up
Jun 10 23:20:51.847: INFO: Creating new host exec pod
Jun 10 23:20:51.859: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:53.863: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:55.864: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:57.863: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:59.862: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:01.863: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 10 23:21:01.863: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 10 23:21:13.881: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.210:80 2>&1 || true; echo; done" in pod services-636/verify-service-up-host-exec-pod
Jun 10 23:21:13.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.210:80 2>&1 || true; echo; done'
Jun 10 23:21:14.222: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n… (the wget/echo trace repeats for all 150 iterations) …"
Jun 10 23:21:14.222: INFO: stdout: "service-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\n… (150 responses in all, each naming one of the three backends -p5njc, -45zbr, or -ts8pb) …"
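The 3-backend check boils down to: issue many requests against the VIP, then count the distinct responder names; with three round-robin endpoints, 150 requests are overwhelmingly likely to hit all of them. The counting step is a small pipeline (a sketch; the probe loop is the one from the log):

```shell
#!/bin/sh
# Count distinct non-empty response lines -- each backend pod echoes its
# own name, so this equals the number of reachable backends.
count_backends() {
  sort -u | grep -c .
}

# Fed from the log's probe loop, e.g.:
#   for i in $(seq 1 150); do
#     wget -q -T 1 -O - http://10.233.23.210:80 2>&1 || true; echo
#   done | count_backends     # expect 3

# Offline demonstration with canned responses (blank line = failed probe):
printf 'p5njc\n45zbr\n\nts8pb\n45zbr\np5njc\n' | count_backends   # prints 3
```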
Jun 10 23:21:14.223: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.210:80 2>&1 || true; echo; done" in pod services-636/verify-service-up-exec-pod-jtzxl
Jun 10 23:21:14.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-up-exec-pod-jtzxl -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.210:80 2>&1 || true; echo; done'
Jun 10 23:21:14.590: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.210:80\n+ echo\n… (the wget/echo trace repeats for all 150 iterations) …"
Jun 10 23:21:14.590: INFO: stdout: "service-headless-toggled-p5njc\nservice-headless-toggled-45zbr\nservice-headless-toggled-45zbr\n… (150 responses in all, each naming one of the three backends -p5njc, -45zbr, or -ts8pb) …"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-636
STEP: Deleting pod verify-service-up-exec-pod-jtzxl in namespace services-636
STEP: verifying service-headless is still not up
Jun 10 23:21:14.607: INFO: Creating new host exec pod
Jun 10 23:21:14.620: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:16.624: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:18.624: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 10 23:21:18.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.38.60:80 && echo service-down-failed'
Jun 10 23:21:20.996: INFO: rc: 28
Jun 10 23:21:20.996: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.38.60:80 && echo service-down-failed" in pod services-636/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-636 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.38.60:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.38.60:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-636
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:21.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-636" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:83.210 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":377,"failed":0}
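The down-check in the test above relies on curl's exit code 28 (operation timed out) to prove the service is no longer reachable: the in-pod command only echoes "service-down-failed" if curl succeeds. A minimal sketch of that exit-code interpretation, with the `kubectl exec`/curl call replaced by a stub so it runs standalone (the real probe is `curl -g -s --connect-timeout 2 http://<clusterIP>:80` inside verify-service-down-host-exec-pod):

```shell
#!/bin/sh
# Sketch: interpret curl exit codes the way the e2e down-check does.
# The real command (from the log) is:
#   kubectl exec <pod> -- /bin/sh -x -c \
#     'curl -g -s --connect-timeout 2 http://10.233.38.60:80 && echo service-down-failed'
# probe is a stub standing in for the in-pod curl; this run returned rc=28.

probe() {
  return 28
}

probe
rc=$?
case "$rc" in
  0)  echo "service-down-failed: endpoint still answered" ;;
  28) echo "service is down (curl timed out as expected)" ;;
  *)  echo "unexpected curl exit code: $rc" ;;
esac
```

The key detail is that a *timeout* is the passing outcome here; a zero exit from curl would mean the supposedly-down service still has live endpoints.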

SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:52.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-4413
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:20:53.039: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:20:53.070: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:55.074: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:57.072: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:20:59.075: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:01.075: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:03.075: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:05.074: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:07.073: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:09.079: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:11.077: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:13.076: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:15.073: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:21:15.078: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:21:21.116: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:21:21.116: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:21:21.123: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:21.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4413" for this suite.


S [SKIPPING] [28.223 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:54.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Jun 10 23:20:54.981: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:56.985: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:58.988: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:00.986: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:02.986: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Jun 10 23:21:03.001: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:05.005: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:07.008: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:09.007: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:11.006: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Jun 10 23:21:13.023: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:15.028: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:17.028: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:19.029: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:21.026: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Jun 10 23:21:21.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-5253 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Jun 10 23:21:21.297: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
Jun 10 23:21:21.297: INFO: stdout: "tcp      6 3598 CLOSE_WAIT src=10.244.3.146 dst=10.10.190.208 sport=44846 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=40418 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Jun 10 23:21:21.297: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3598 CLOSE_WAIT src=10.244.3.146 dst=10.10.190.208 sport=44846 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=40418 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:21.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-5253" for this suite.


• [SLOW TEST:26.362 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":3,"skipped":946,"failed":0}
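The conntrack entry captured above encodes the connection state and its remaining timeout in the third and fourth fields (3598 seconds and CLOSE_WAIT here, consistent with an hour-scale nf_conntrack_tcp_timeout_close_wait rather than the 60 s kernel default, which is what this test is checking for). A small sketch that pulls those fields out of such a line with awk; the entry string is copied verbatim from this run:

```shell
#!/bin/sh
# Sketch: extract TCP state and remaining timeout from a `conntrack -L` row.
# Field layout for TCP rows: proto, protonum, timeout-seconds, state, tuples...
entry='tcp      6 3598 CLOSE_WAIT src=10.244.3.146 dst=10.10.190.208 sport=44846 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=40418 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1'

timeout=$(echo "$entry" | awk '{print $3}')
state=$(echo "$entry" | awk '{print $4}')

echo "state=$state remaining=${timeout}s"
```

In the live test this line comes from `conntrack -L -f ipv4 -d <server-ip> | grep -m 1 'CLOSE_WAIT.*dport=<port>'` run inside the privileged e2e-net-exec pod, as shown in the log.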

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 10 23:21:21.398: INFO: Running AfterSuite actions on all nodes

S
------------------------------
Jun 10 23:21:21.400: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:21:11.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-4517
STEP: creating replication controller externalip-test in namespace services-4517
I0610 23:21:11.607915      25 runners.go:190] Created replication controller with name: externalip-test, namespace: services-4517, replica count: 2
I0610 23:21:14.658761      25 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:21:17.660295      25 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:21:20.661088      25 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 10 23:21:20.661: INFO: Creating new exec pod
Jun 10 23:21:25.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4517 exec execpodrtgtb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jun 10 23:21:25.931: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Jun 10 23:21:25.931: INFO: stdout: ""
Jun 10 23:21:26.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4517 exec execpodrtgtb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jun 10 23:21:28.001: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Jun 10 23:21:28.001: INFO: stdout: "externalip-test-rwjx6"
Jun 10 23:21:28.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4517 exec execpodrtgtb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.57.59 80'
Jun 10 23:21:28.871: INFO: stderr: "+ nc -v -t -w 2 10.233.57.59 80\n+ echo hostName\nConnection to 10.233.57.59 80 port [tcp/http] succeeded!\n"
Jun 10 23:21:28.871: INFO: stdout: "externalip-test-rwjx6"
Jun 10 23:21:28.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4517 exec execpodrtgtb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jun 10 23:21:29.334: INFO: stderr: "+ nc -v -t -w 2 203.0.113.250 80\n+ echo hostName\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Jun 10 23:21:29.334: INFO: stdout: "externalip-test-fsq7x"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:29.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4517" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:17.767 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":6,"skipped":1132,"failed":0}
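Note that the first nc probe above succeeded at the TCP level but returned empty stdout, and the framework simply retried a second later until a backend hostname came back. A sketch of that retry shape, with the `kubectl exec ... nc` call stubbed so the script is self-contained (the stub returns nothing on the first attempt and a hostname on the second, mirroring this run; the backend name is the one observed in the log):

```shell
#!/bin/sh
# Sketch: retry an exec probe until it yields a non-empty hostname,
# as the framework does when the first nc attempt returns empty stdout.

probe() {
  # Stub for: kubectl exec <pod> -- /bin/sh -x -c \
  #   'echo hostName | nc -v -t -w 2 externalip-test 80'
  # Succeeds but prints nothing on attempt 1, echoes a backend on attempt 2.
  if [ "$1" -ge 2 ]; then
    echo "externalip-test-rwjx6"
  fi
}

hostname=""
tries=0
while [ -z "$hostname" ] && [ "$tries" -lt 5 ]; do
  tries=$((tries + 1))
  hostname=$(probe "$tries")
done
echo "reached backend: $hostname after $tries tries"
```

The empty-then-retry pattern matters because nc reports connection success on stderr; only a non-empty stdout proves a backend actually served the request.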
Jun 10 23:21:29.344: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:19:16.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0610 23:19:17.004877      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:19:17.005: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:19:17.006: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-3727
STEP: creating service up-down-1 in namespace services-3727
STEP: creating replication controller up-down-1 in namespace services-3727
I0610 23:19:17.017880      29 runners.go:190] Created replication controller with name: up-down-1, namespace: services-3727, replica count: 3
I0610 23:19:20.070693      29 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:19:23.070943      29 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:19:26.072711      29 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:19:29.073773      29 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:19:32.073952      29 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-3727
STEP: creating service up-down-2 in namespace services-3727
STEP: creating replication controller up-down-2 in namespace services-3727
I0610 23:19:32.086462      29 runners.go:190] Created replication controller with name: up-down-2, namespace: services-3727, replica count: 3
I0610 23:19:35.138232      29 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:19:38.140914      29 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:19:41.141811      29 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
Jun 10 23:19:41.145: INFO: Creating new host exec pod
Jun 10 23:19:41.159: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:43.162: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:45.162: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 10 23:19:45.162: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 10 23:19:53.180: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.46:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-host-exec-pod
Jun 10 23:19:53.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.46:80 2>&1 || true; echo; done'
Jun 10 23:19:53.693: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.4.46:80\n+ echo\n" [the same wget/echo trace pair repeats for all 150 iterations; repeats elided]
Jun 10 23:19:53.694: INFO: stdout: "up-down-1-4l97b\nup-down-1-zzdnm\nup-down-1-sb4wl\n" [150 responses total, every entry one of the three up-down-1 backends -4l97b, -zzdnm, -sb4wl; repeats elided]
Jun 10 23:19:53.694: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.46:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-exec-pod-drcw6
Jun 10 23:19:53.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-exec-pod-drcw6 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.46:80 2>&1 || true; echo; done'
Jun 10 23:19:54.290: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.4.46:80\n+ echo\n" [the same wget/echo trace pair repeats for all 150 iterations; repeats elided]
Jun 10 23:19:54.290: INFO: stdout: 150 responses, each naming one of the three backends up-down-1-4l97b, up-down-1-sb4wl, up-down-1-zzdnm (full repeating list condensed)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3727
STEP: Deleting pod verify-service-up-exec-pod-drcw6 in namespace services-3727
STEP: verifying service up-down-2 is up
Jun 10 23:19:54.304: INFO: Creating new host exec pod
Jun 10 23:19:54.315: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:56.318: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:19:58.320: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:00.319: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:02.319: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:04.320: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:06.319: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:08.320: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:10.322: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:12.321: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 10 23:20:12.322: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 10 23:20:22.339: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-host-exec-pod
Jun 10 23:20:22.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done'
Jun 10 23:20:23.027: INFO: stderr: "+ seq 1 150" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.63.227:80" / "+ echo" (identical trace lines condensed)
Jun 10 23:20:23.027: INFO: stdout: 150 responses, each naming one of the three backends up-down-2-65b4b, up-down-2-bqlf7, up-down-2-lsjk5 (full repeating list condensed)
Jun 10 23:20:23.027: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-exec-pod-ptkts
Jun 10 23:20:23.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-exec-pod-ptkts -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done'
Jun 10 23:20:23.574: INFO: stderr: "+ seq 1 150" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.63.227:80" / "+ echo" (identical trace lines condensed)
Jun 10 23:20:23.575: INFO: stdout: 150 responses, each naming one of the three backends up-down-2-65b4b, up-down-2-bqlf7, up-down-2-lsjk5 (full repeating list condensed)
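The "service has 3 reachable backends" step above works by hitting the service ClusterIP 150 times with wget and checking that the stdout stream contains every expected backend pod name. A minimal sketch of that decision logic (the function name `count_backends` is illustrative, not from the e2e framework; the real test compares the response set against the expected pod list inside Go code):

```shell
#!/bin/sh
# Sketch: the wget loop's stdout is one backend pod name per line.
# Count the distinct non-empty names to decide how many backends answered.
count_backends() {
  sort | uniq | grep -c .
}

# Example with responses like the ones logged above:
printf 'up-down-2-65b4b\nup-down-2-bqlf7\nup-down-2-65b4b\nup-down-2-lsjk5\n' | count_backends
```

If the count matches the replica count (3 here), the service is considered fully up; a missing name after 150 probes indicates an unreachable endpoint.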
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3727
STEP: Deleting pod verify-service-up-exec-pod-ptkts in namespace services-3727
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-3727, will wait for the garbage collector to delete the pods
Jun 10 23:20:23.646: INFO: Deleting ReplicationController up-down-1 took: 3.866081ms
Jun 10 23:20:23.746: INFO: Terminating ReplicationController up-down-1 pods took: 100.406704ms
STEP: verifying service up-down-1 is not up
Jun 10 23:20:33.755: INFO: Creating new host exec pod
Jun 10 23:20:33.770: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:35.773: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:37.774: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:39.773: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 10 23:20:39.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.4.46:80 && echo service-down-failed'
Jun 10 23:20:42.135: INFO: rc: 28
Jun 10 23:20:42.135: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.4.46:80 && echo service-down-failed" in pod services-3727/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.4.46:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.4.46:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3727
STEP: verifying service up-down-2 is still up
Jun 10 23:20:42.142: INFO: Creating new host exec pod
Jun 10 23:20:42.176: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:44.180: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:46.180: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:48.181: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:50.179: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 10 23:20:50.179: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 10 23:20:56.198: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-host-exec-pod
Jun 10 23:20:56.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done'
Jun 10 23:20:56.693: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.63.227:80\n+ echo\n" (the wget/echo trace pair repeats 150 times; repetitions elided)
Jun 10 23:20:56.694: INFO: stdout: "up-down-2-bqlf7\nup-down-2-bqlf7\nup-down-2-bqlf7\nup-down-2-lsjk5\nup-down-2-bqlf7\n..." (150 responses in total, spread across all three backends up-down-2-bqlf7, up-down-2-lsjk5, and up-down-2-65b4b; repetitions elided)
Jun 10 23:20:56.694: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-exec-pod-xhjb4
Jun 10 23:20:56.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-exec-pod-xhjb4 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done'
Jun 10 23:20:57.110: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.63.227:80\n+ echo\n" (the wget/echo trace pair repeats 150 times; repetitions elided)
Jun 10 23:20:57.111: INFO: stdout: "up-down-2-65b4b\nup-down-2-65b4b\nup-down-2-bqlf7\nup-down-2-lsjk5\nup-down-2-65b4b\n..." (150 responses in total, spread across all three backends up-down-2-bqlf7, up-down-2-lsjk5, and up-down-2-65b4b; repetitions elided)
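The "verifying service has 3 reachable backends" steps above hit the service ClusterIP (http://10.233.63.227:80) 150 times with `wget` from both a host exec pod and a regular exec pod, then check that every endpoint pod's name appears in the combined output. A minimal, self-contained sketch of that uniqueness check follows; the sample responses are hard-coded here for illustration, whereas the real test captures them live from the exec pod:

```shell
# Hypothetical sketch of the backend-coverage check, NOT the e2e framework's
# actual Go implementation. A fixed sample stands in for the captured output.
responses='up-down-2-bqlf7
up-down-2-lsjk5
up-down-2-65b4b
up-down-2-bqlf7
up-down-2-lsjk5'

# A service with 3 endpoints counts as fully reachable when 3 distinct
# backend pod names show up among the responses.
unique=$(printf '%s\n' "$responses" | sort -u | wc -l)
echo "$unique"   # prints 3
```

This is why the log repeats the probe loop after each topology change (service deleted, new ReplicationController started): the distribution of names in stdout is what proves kube-proxy is load-balancing across all live endpoints.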
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3727
STEP: Deleting pod verify-service-up-exec-pod-xhjb4 in namespace services-3727
STEP: creating service up-down-3 in namespace services-3727
STEP: creating replication controller up-down-3 in namespace services-3727
I0610 23:20:57.132808      29 runners.go:190] Created replication controller with name: up-down-3, namespace: services-3727, replica count: 3
I0610 23:21:00.184284      29 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:21:03.185639      29 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:21:06.187793      29 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:21:09.188998      29 runners.go:190] up-down-3 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:21:12.189996      29 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-2 is still up
Jun 10 23:21:12.192: INFO: Creating new host exec pod
Jun 10 23:21:12.204: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:14.208: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:16.209: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 10 23:21:16.209: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 10 23:21:22.224: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-host-exec-pod
Jun 10 23:21:22.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done'
Jun 10 23:21:22.702: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.63.227:80\n+ echo\n" (the wget/echo trace pair repeats 150 times; repetitions elided)
Jun 10 23:21:22.702: INFO: stdout: "up-down-2-lsjk5\nup-down-2-65b4b\n [… 150 responses in total, interleaved across all three up-down-2 endpoints (lsjk5, 65b4b, bqlf7) …]"
Jun 10 23:21:22.702: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-exec-pod-n8ptl
Jun 10 23:21:22.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-exec-pod-n8ptl -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.63.227:80 2>&1 || true; echo; done'
Jun 10 23:21:23.583: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.63.227:80\n+ echo\n [… "+ wget" / "+ echo" trace repeated for all 150 iterations …]"
Jun 10 23:21:23.583: INFO: stdout: "up-down-2-bqlf7\nup-down-2-bqlf7\n [… 150 responses in total, interleaved across all three up-down-2 endpoints (lsjk5, 65b4b, bqlf7) …]"
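The verification pass above boils down to one check: after the 150-request probe loop, every backend pod of the service must have answered at least once. A minimal shell sketch of that check follows; it is not the suite's actual implementation (the e2e framework does this in Go), and the `responses` value is hypothetical sample data standing in for the stdout captured from the probe pod.

```shell
# A minimal sketch (not the e2e suite's actual Go implementation) of the
# "service has N reachable backends" check: the service counts as up once
# every backend pod name has appeared in the probe output at least once.
# $responses is hypothetical sample data standing in for the real stdout,
# which comes from 150 wget calls against the service's ClusterIP.
responses="up-down-2-lsjk5
up-down-2-65b4b
up-down-2-bqlf7
up-down-2-lsjk5"

expected_backends=3

# Drop blank lines (a timed-out probe prints nothing but the trailing echo),
# then count the distinct pod names among the answers.
unique=$(printf '%s\n' "$responses" | grep -v '^$' | sort -u | wc -l | tr -d ' ')
echo "distinct backends seen: $unique"
[ "$unique" -eq "$expected_backends" ] && echo "all $expected_backends backends reachable"
```

Counting distinct responders rather than checking any single reply makes the probe tolerant of kube-proxy's load balancing: any one request may land on any backend, but 150 requests are expected to cover all of them.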
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3727
STEP: Deleting pod verify-service-up-exec-pod-n8ptl in namespace services-3727
STEP: verifying service up-down-3 is up
Jun 10 23:21:23.597: INFO: Creating new host exec pod
Jun 10 23:21:23.610: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:25.614: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 10 23:21:25.614: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 10 23:21:31.631: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.37.18:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-host-exec-pod
Jun 10 23:21:31.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.37.18:80 2>&1 || true; echo; done'
Jun 10 23:21:32.037: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.37.18:80\n+ echo\n [… "+ wget" / "+ echo" trace repeated for all 150 iterations …]"
Jun 10 23:21:32.037: INFO: stdout: "up-down-3-kf6tj\nup-down-3-zt6tz\n [… 150 responses in total, interleaved across all three up-down-3 endpoints (kf6tj, zt6tz, djpgv) …]"
Jun 10 23:21:32.037: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.37.18:80 2>&1 || true; echo; done" in pod services-3727/verify-service-up-exec-pod-kwkz4
Jun 10 23:21:32.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3727 exec verify-service-up-exec-pod-kwkz4 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.37.18:80 2>&1 || true; echo; done'
Jun 10 23:21:32.457: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.37.18:80\n+ echo\n [… "+ wget" / "+ echo" trace repeated for all 150 iterations …]"
Jun 10 23:21:32.458: INFO: stdout: "up-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-kf6tj\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-kf6tj\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-kf6tj\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3
-djpgv\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-djpgv\nup-down-3-kf6tj\nup-down-3-zt6tz\nup-down-3-kf6tj\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3727
STEP: Deleting pod verify-service-up-exec-pod-kwkz4 in namespace services-3727
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:32.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3727" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:135.495 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
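
The `+ wget` trace and the pod-name stdout above come from the service-up verification: an exec pod probes the service ClusterIP in a loop, and the test then checks that every backend pod name appears in the combined output. A minimal shell sketch of that check (hypothetical helper names; the real logic lives in the e2e framework's Go code):

```shell
#!/bin/sh
# Sketch of the service-up check seen in the trace above (assumed helper
# names; not the framework's actual implementation).

# Probe the service ClusterIP `tries` times; each backend echoes its pod name.
probe_service() {
  ip="$1"; tries="$2"
  i=0
  while [ "$i" -lt "$tries" ]; do
    wget -q -T 1 -O - "http://${ip}:80"
    echo    # newline separator, matching the "+ echo" lines in the trace
    i=$((i + 1))
  done
}

# Count how many distinct backends answered, given the captured stdout.
distinct_backends() { sort -u | grep -c .; }
```

Piping the stdout shown above through `distinct_backends` yields 3, i.e. all three `up-down-3-*` replicas served traffic.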
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:21:05.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-8579
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:21:05.614: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:21:05.648: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:07.652: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:09.651: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:11.650: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:13.652: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:15.652: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:17.652: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:19.651: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:21.651: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:23.652: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:25.651: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:21:25.655: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:21:27.659: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:21:33.698: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:21:33.698: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:21:33.704: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:33.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8579" for this suite.


S [SKIPPING] [28.213 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
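
The skip reason above, `Requires at least 2 nodes (not -1)`, comes from the framework's two-node precondition; the -1 most likely means node enumeration/selection failed (for example, no usable test addresses were found) rather than that the cluster truly has fewer than two nodes. A quick manual sanity check (a sketch, assuming `kubectl` access and a hypothetical helper) is to count Ready, non-control-plane nodes:

```shell
#!/bin/sh
# Hypothetical helper: count Ready worker nodes from `kubectl get nodes` output.
# Usage: kubectl get nodes --no-headers | count_schedulable
count_schedulable() {
  # In `kubectl get nodes` output, STATUS is column 2 and ROLES is column 3;
  # skip control-plane/master nodes.
  awk '$2 == "Ready" && $3 !~ /control-plane|master/ { n++ } END { print n+0 }'
}
```

Against this cluster's node list (master1 plus node1/node2, all Ready per the node dumps later in the log) it would print 2, which suggests the skip stems from the test's node-selection step rather than cluster size.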
------------------------------
Jun 10 23:21:33.716: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:21:04.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-4159
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:21:04.682: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:21:04.717: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:06.720: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:08.719: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:10.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:12.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:14.719: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:16.722: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:18.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:20.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:22.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:24.720: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:21:24.724: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 10 23:21:26.728: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:21:36.749: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:21:36.749: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:21:36.756: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:36.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4159" for this suite.


S [SKIPPING] [32.196 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Jun 10 23:21:36.769: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:21:21.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-862
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 10 23:21:21.224: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:21:21.257: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:23.262: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:25.261: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:27.261: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:29.263: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:31.260: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:33.262: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:35.262: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:37.261: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:39.265: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:41.261: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 10 23:21:43.262: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 10 23:21:43.267: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 10 23:21:47.292: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 10 23:21:47.292: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 10 23:21:47.299: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:47.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-862" for this suite.


S [SKIPPING] [26.243 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Jun 10 23:21:47.314: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:35.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-3861
STEP: creating a client pod for probing the service svc-udp
Jun 10 23:20:35.690: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:37.693: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:39.693: INFO: The status of Pod pod-client is Running (Ready = true)
Jun 10 23:20:39.703: INFO: Pod client logs: Fri Jun 10 23:20:38 UTC 2022
Fri Jun 10 23:20:38 UTC 2022 Try: 1

Fri Jun 10 23:20:38 UTC 2022 Try: 2

Fri Jun 10 23:20:38 UTC 2022 Try: 3

Fri Jun 10 23:20:38 UTC 2022 Try: 4

Fri Jun 10 23:20:38 UTC 2022 Try: 5

Fri Jun 10 23:20:38 UTC 2022 Try: 6

Fri Jun 10 23:20:38 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Jun 10 23:20:39.718: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:41.723: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:43.723: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:45.722: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:47.722: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:49.722: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-3861 to expose endpoints map[pod-server-1:[80]]
Jun 10 23:20:49.733: INFO: successfully validated that service svc-udp in namespace conntrack-3861 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Jun 10 23:21:49.761: INFO: Pod client logs: Fri Jun 10 23:20:38 UTC 2022
Fri Jun 10 23:20:38 UTC 2022 Try: 1

Fri Jun 10 23:20:38 UTC 2022 Try: 2

Fri Jun 10 23:20:38 UTC 2022 Try: 3

Fri Jun 10 23:20:38 UTC 2022 Try: 4

Fri Jun 10 23:20:38 UTC 2022 Try: 5

Fri Jun 10 23:20:38 UTC 2022 Try: 6

Fri Jun 10 23:20:38 UTC 2022 Try: 7

Fri Jun 10 23:20:43 UTC 2022 Try: 8

Fri Jun 10 23:20:43 UTC 2022 Try: 9

Fri Jun 10 23:20:43 UTC 2022 Try: 10

Fri Jun 10 23:20:43 UTC 2022 Try: 11

Fri Jun 10 23:20:43 UTC 2022 Try: 12

Fri Jun 10 23:20:43 UTC 2022 Try: 13

Fri Jun 10 23:20:48 UTC 2022 Try: 14

Fri Jun 10 23:20:48 UTC 2022 Try: 15

Fri Jun 10 23:20:48 UTC 2022 Try: 16

Fri Jun 10 23:20:48 UTC 2022 Try: 17

Fri Jun 10 23:20:48 UTC 2022 Try: 18

Fri Jun 10 23:20:48 UTC 2022 Try: 19

Fri Jun 10 23:20:53 UTC 2022 Try: 20

Fri Jun 10 23:20:53 UTC 2022 Try: 21

Fri Jun 10 23:20:53 UTC 2022 Try: 22

Fri Jun 10 23:20:53 UTC 2022 Try: 23

Fri Jun 10 23:20:53 UTC 2022 Try: 24

Fri Jun 10 23:20:53 UTC 2022 Try: 25

Fri Jun 10 23:20:58 UTC 2022 Try: 26

Fri Jun 10 23:20:58 UTC 2022 Try: 27

Fri Jun 10 23:20:58 UTC 2022 Try: 28

Fri Jun 10 23:20:58 UTC 2022 Try: 29

Fri Jun 10 23:20:58 UTC 2022 Try: 30

Fri Jun 10 23:20:58 UTC 2022 Try: 31

Fri Jun 10 23:21:03 UTC 2022 Try: 32

Fri Jun 10 23:21:03 UTC 2022 Try: 33

Fri Jun 10 23:21:03 UTC 2022 Try: 34

Fri Jun 10 23:21:03 UTC 2022 Try: 35

Fri Jun 10 23:21:03 UTC 2022 Try: 36

Fri Jun 10 23:21:03 UTC 2022 Try: 37

Fri Jun 10 23:21:08 UTC 2022 Try: 38

Fri Jun 10 23:21:08 UTC 2022 Try: 39

Fri Jun 10 23:21:08 UTC 2022 Try: 40

Fri Jun 10 23:21:08 UTC 2022 Try: 41

Fri Jun 10 23:21:08 UTC 2022 Try: 42

Fri Jun 10 23:21:08 UTC 2022 Try: 43

Fri Jun 10 23:21:13 UTC 2022 Try: 44

Fri Jun 10 23:21:13 UTC 2022 Try: 45

Fri Jun 10 23:21:13 UTC 2022 Try: 46

Fri Jun 10 23:21:13 UTC 2022 Try: 47

Fri Jun 10 23:21:13 UTC 2022 Try: 48

Fri Jun 10 23:21:13 UTC 2022 Try: 49

Fri Jun 10 23:21:18 UTC 2022 Try: 50

Fri Jun 10 23:21:18 UTC 2022 Try: 51

Fri Jun 10 23:21:18 UTC 2022 Try: 52

Fri Jun 10 23:21:18 UTC 2022 Try: 53

Fri Jun 10 23:21:18 UTC 2022 Try: 54

Fri Jun 10 23:21:18 UTC 2022 Try: 55

Fri Jun 10 23:21:23 UTC 2022 Try: 56

Fri Jun 10 23:21:23 UTC 2022 Try: 57

Fri Jun 10 23:21:23 UTC 2022 Try: 58

Fri Jun 10 23:21:23 UTC 2022 Try: 59

Fri Jun 10 23:21:23 UTC 2022 Try: 60

Fri Jun 10 23:21:23 UTC 2022 Try: 61

Fri Jun 10 23:21:28 UTC 2022 Try: 62

Fri Jun 10 23:21:28 UTC 2022 Try: 63

Fri Jun 10 23:21:28 UTC 2022 Try: 64

Fri Jun 10 23:21:28 UTC 2022 Try: 65

Fri Jun 10 23:21:28 UTC 2022 Try: 66

Fri Jun 10 23:21:28 UTC 2022 Try: 67

Fri Jun 10 23:21:33 UTC 2022 Try: 68

Fri Jun 10 23:21:33 UTC 2022 Try: 69

Fri Jun 10 23:21:33 UTC 2022 Try: 70

Fri Jun 10 23:21:33 UTC 2022 Try: 71

Fri Jun 10 23:21:33 UTC 2022 Try: 72

Fri Jun 10 23:21:33 UTC 2022 Try: 73

Fri Jun 10 23:21:38 UTC 2022 Try: 74

Fri Jun 10 23:21:38 UTC 2022 Try: 75

Fri Jun 10 23:21:38 UTC 2022 Try: 76

Fri Jun 10 23:21:38 UTC 2022 Try: 77

Fri Jun 10 23:21:38 UTC 2022 Try: 78

Fri Jun 10 23:21:38 UTC 2022 Try: 79

Fri Jun 10 23:21:43 UTC 2022 Try: 80

Fri Jun 10 23:21:43 UTC 2022 Try: 81

Fri Jun 10 23:21:43 UTC 2022 Try: 82

Fri Jun 10 23:21:43 UTC 2022 Try: 83

Fri Jun 10 23:21:43 UTC 2022 Try: 84

Fri Jun 10 23:21:43 UTC 2022 Try: 85

Fri Jun 10 23:21:48 UTC 2022 Try: 86

Fri Jun 10 23:21:48 UTC 2022 Try: 87

Fri Jun 10 23:21:48 UTC 2022 Try: 88

Fri Jun 10 23:21:48 UTC 2022 Try: 89

Fri Jun 10 23:21:48 UTC 2022 Try: 90

Fri Jun 10 23:21:48 UTC 2022 Try: 91

Jun 10 23:21:49.761: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00148f980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00148f980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00148f980, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
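
The `Try: N` lines in the client log above are the UDP probe loop: the client pod sends a datagram to the NodePort on every attempt and logs a timestamped counter; a backend reply would be printed between tries, so 91 bare counters means no response ever arrived. A rough shell equivalent (an assumption for illustration; the real client is the agnhost image):

```shell
#!/bin/sh
# Rough sketch of the UDP client probe (assumed; not agnhost's actual code).

# Format one timestamped attempt line, matching the log above.
try_line() { printf '%s Try: %s\n' "$(date -u)" "$1"; }

# Send one datagram per attempt; any server reply is echoed between tries.
udp_probe() {
  host="$1"; port="$2"; tries="$3"
  n=1
  while [ "$n" -le "$tries" ]; do
    try_line "$n"
    echo hostname | nc -u -w 1 "$host" "$port" 2>/dev/null
    n=$((n + 1))
  done
}
```

A plausible cause for the FAIL, and the failure mode this conntrack test exists to catch, is a stale conntrack entry created while `svc-udp` had no endpoints that keeps blackholing the client's datagrams after `pod-server-1` becomes Ready.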
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-3861".
STEP: Found 8 events.
Jun 10 23:21:49.765: INFO: At 2022-06-10 23:20:38 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 23:21:49.766: INFO: At 2022-06-10 23:20:38 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 299.529698ms
Jun 10 23:21:49.766: INFO: At 2022-06-10 23:20:38 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Jun 10 23:21:49.766: INFO: At 2022-06-10 23:20:38 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Jun 10 23:21:49.766: INFO: At 2022-06-10 23:20:43 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 23:21:49.766: INFO: At 2022-06-10 23:20:44 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 482.402552ms
Jun 10 23:21:49.766: INFO: At 2022-06-10 23:20:44 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Jun 10 23:21:49.766: INFO: At 2022-06-10 23:20:44 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Jun 10 23:21:49.768: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Jun 10 23:21:49.768: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:35 +0000 UTC  }]
Jun 10 23:21:49.768: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:39 +0000 UTC  }]
Jun 10 23:21:49.768: INFO: 
Jun 10 23:21:49.773: INFO: 
Logging node info for node master1
Jun 10 23:21:49.775: INFO: Node Info: &Node{ObjectMeta:{master1    e472448e-87fd-4e8d-bbb7-98d43d3d8a87 75981 0 2022-06-10 19:57:38 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:44 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:44 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:44 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:21:44 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:21:49.776: INFO: 
Logging kubelet events for node master1
Jun 10 23:21:49.778: INFO: 
Logging pods the kubelet thinks are on node master1
Jun 10 23:21:49.804: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container kube-proxy ready: true, restart count 3
Jun 10 23:21:49.804: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container docker-registry ready: true, restart count 0
Jun 10 23:21:49.804: INFO: 	Container nginx ready: true, restart count 0
Jun 10 23:21:49.804: INFO: node-feature-discovery-controller-cff799f9f-74qhv started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container nfd-controller ready: true, restart count 0
Jun 10 23:21:49.804: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:21:49.804: INFO: 	Container prometheus-operator ready: true, restart count 0
Jun 10 23:21:49.804: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:21:49.804: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:21:49.804: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 10 23:21:49.804: INFO: kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 10 23:21:49.804: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container kube-scheduler ready: true, restart count 0
Jun 10 23:21:49.804: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Init container install-cni ready: true, restart count 0
Jun 10 23:21:49.804: INFO: 	Container kube-flannel ready: true, restart count 1
Jun 10 23:21:49.804: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:21:49.804: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.804: INFO: 	Container autoscaler ready: true, restart count 1
Jun 10 23:21:49.903: INFO: 
Latency metrics for node master1
Jun 10 23:21:49.903: INFO: 
Logging node info for node master2
Jun 10 23:21:49.906: INFO: Node Info: &Node{ObjectMeta:{master2    66c7af40-c8de-462b-933d-792f10a44a43 75975 0 2022-06-10 19:58:07 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-10 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:44 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:44 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:44 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:21:44 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:21:49.906: INFO: 
Logging kubelet events for node master2
Jun 10 23:21:49.908: INFO: 
Logging pods the kubelet thinks are on node master2
Jun 10 23:21:49.932: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.932: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jun 10 23:21:49.932: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.932: INFO: 	Container kube-scheduler ready: true, restart count 3
Jun 10 23:21:49.932: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.932: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 10 23:21:49.933: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:21:49.933: INFO: 	Init container install-cni ready: true, restart count 2
Jun 10 23:21:49.933: INFO: 	Container kube-flannel ready: true, restart count 1
Jun 10 23:21:49.933: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.933: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:21:49.933: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.933: INFO: 	Container coredns ready: true, restart count 1
Jun 10 23:21:49.933: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:49.933: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 10 23:21:49.933: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:21:49.933: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:21:49.933: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:21:50.012: INFO: 
Latency metrics for node master2
Jun 10 23:21:50.013: INFO: 
Logging node info for node master3
Jun 10 23:21:50.015: INFO: Node Info: &Node{ObjectMeta:{master3    e51505ec-e791-4bbe-aeb1-bd0671fd4464 75914 0 2022-06-10 19:58:16 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:40 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:40 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:40 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:21:40 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:21:50.015: INFO: 
Logging kubelet events for node master3
Jun 10 23:21:50.018: INFO: 
Logging pods the kubelet thinks are on node master3
Jun 10 23:21:50.030: INFO: kube-flannel-jpd2j started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:21:50.030: INFO: 	Init container install-cni ready: true, restart count 2
Jun 10 23:21:50.030: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 10 23:21:50.030: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.030: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 10 23:21:50.030: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.031: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 10 23:21:50.031: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.031: INFO: 	Container kube-scheduler ready: true, restart count 1
Jun 10 23:21:50.031: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.031: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:21:50.031: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.031: INFO: 	Container coredns ready: true, restart count 1
Jun 10 23:21:50.031: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:21:50.031: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:21:50.031: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:21:50.031: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.031: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 10 23:21:50.152: INFO: 
Latency metrics for node master3
Jun 10 23:21:50.152: INFO: 
Logging node info for node node1
Jun 10 23:21:50.154: INFO: Node Info: &Node{ObjectMeta:{node1    fa951133-0317-499e-8a0a-fc7a0636a371 75918 0 2022-06-10 19:59:19 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 
19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 22:28:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:41 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:41 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:41 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:21:41 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 
k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:21:50.155: INFO: 
Logging kubelet events for node node1
Jun 10 23:21:50.157: INFO: 
Logging pods the kubelet thinks are on node node1
Jun 10 23:21:50.174: INFO: service-proxy-disabled-rhf5t started at 2022-06-10 23:20:27 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Jun 10 23:21:50.174: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 20:11:42 +0000 UTC (0+3 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container discover ready: false, restart count 0
Jun 10 23:21:50.174: INFO: 	Container init ready: false, restart count 0
Jun 10 23:21:50.174: INFO: 	Container install ready: false, restart count 0
Jun 10 23:21:50.174: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 UTC (0+4 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container config-reloader ready: true, restart count 0
Jun 10 23:21:50.174: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 23:21:50.174: INFO: 	Container grafana ready: true, restart count 0
Jun 10 23:21:50.174: INFO: 	Container prometheus ready: true, restart count 1
Jun 10 23:21:50.174: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:21:50.174: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:21:50.174: INFO: execpodvq78m started at 2022-06-10 23:20:50 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 10 23:21:50.174: INFO: netserver-0 started at 2022-06-10 23:21:04 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container webserver ready: false, restart count 0
Jun 10 23:21:50.174: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 10 23:21:50.174: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 10 23:21:50.174: INFO: service-proxy-disabled-5g8xb started at 2022-06-10 23:20:27 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Jun 10 23:21:50.174: INFO: e2e-net-exec started at 2022-06-10 23:20:54 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container e2e-net-exec ready: true, restart count 0
Jun 10 23:21:50.174: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10 20:12:30 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container cmk-webhook ready: true, restart count 0
Jun 10 23:21:50.174: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container collectd ready: true, restart count 0
Jun 10 23:21:50.174: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 10 23:21:50.174: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 10 23:21:50.174: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container nodereport ready: true, restart count 0
Jun 10 23:21:50.174: INFO: 	Container reconcile ready: true, restart count 0
Jun 10 23:21:50.174: INFO: netserver-0 started at 2022-06-10 23:21:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container webserver ready: true, restart count 0
Jun 10 23:21:50.174: INFO: service-proxy-toggled-vs72d started at 2022-06-10 23:20:39 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Jun 10 23:21:50.174: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 10 23:21:50.174: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:21:50.174: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container tas-extender ready: true, restart count 0
Jun 10 23:21:50.174: INFO: pod-client started at 2022-06-10 23:20:35 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container pod-client ready: true, restart count 0
Jun 10 23:21:50.174: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 10 23:21:50.174: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Init container install-cni ready: true, restart count 2
Jun 10 23:21:50.174: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 10 23:21:50.174: INFO: nodeport-update-service-klb7d started at 2022-06-10 23:20:41 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.174: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun 10 23:21:50.417: INFO: 
Latency metrics for node node1
Jun 10 23:21:50.417: INFO: 
Logging node info for node node2
Jun 10 23:21:50.421: INFO: Node Info: &Node{ObjectMeta:{node2    e3ba5b73-7a35-4d3f-9138-31db06c90dc3 76016 0 2022-06-10 19:59:19 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 
19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 22:28:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-10 22:43:49 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:49 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:49 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:21:49 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:21:49 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 
localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 
k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:21:50.422: INFO: 
Logging kubelet events for node node2
Jun 10 23:21:50.425: INFO: 
Logging pods the kubelet thinks are on node node2
Jun 10 23:21:50.448: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 10 23:21:50.448: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Init container install-cni ready: true, restart count 2
Jun 10 23:21:50.448: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 10 23:21:50.448: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container discover ready: false, restart count 0
Jun 10 23:21:50.448: INFO: 	Container init ready: false, restart count 0
Jun 10 23:21:50.448: INFO: 	Container install ready: false, restart count 0
Jun 10 23:21:50.448: INFO: verify-service-up-exec-pod-hvtxn started at 2022-06-10 23:21:46 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 10 23:21:50.448: INFO: test-container-pod started at 2022-06-10 23:21:26 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container webserver ready: false, restart count 0
Jun 10 23:21:50.448: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 10 23:21:50.448: INFO: pod-server-1 started at 2022-06-10 23:20:39 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 10 23:21:50.448: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container collectd ready: true, restart count 0
Jun 10 23:21:50.448: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 10 23:21:50.448: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 10 23:21:50.448: INFO: verify-service-up-host-exec-pod started at 2022-06-10 23:21:42 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 10 23:21:50.448: INFO: service-proxy-disabled-xsxzl started at 2022-06-10 23:20:28 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Jun 10 23:21:50.448: INFO: nodeport-update-service-8ph5w started at 2022-06-10 23:20:41 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun 10 23:21:50.448: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 23:21:50.448: INFO: test-container-pod started at 2022-06-10 23:21:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container webserver ready: true, restart count 0
Jun 10 23:21:50.448: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:21:50.448: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:21:50.448: INFO: netserver-1 started at 2022-06-10 23:21:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container webserver ready: true, restart count 0
Jun 10 23:21:50.448: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 10 23:21:50.448: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:21:50.448: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Jun 10 23:21:50.448: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container nodereport ready: true, restart count 0
Jun 10 23:21:50.448: INFO: 	Container reconcile ready: true, restart count 0
Jun 10 23:21:50.448: INFO: service-proxy-toggled-lhqb9 started at 2022-06-10 23:20:39 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Jun 10 23:21:50.448: INFO: netserver-1 started at 2022-06-10 23:21:04 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container webserver ready: false, restart count 0
Jun 10 23:21:50.448: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 10 23:21:50.448: INFO: service-proxy-toggled-82k4x started at 2022-06-10 23:20:39 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:21:50.448: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Jun 10 23:21:50.722: INFO: 
Latency metrics for node node2
Jun 10 23:21:50.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-3861" for this suite.


• Failure [75.093 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Jun 10 23:21:49.761: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":2,"skipped":347,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
Jun 10 23:21:50.736: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:27.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-2651
STEP: creating service service-proxy-disabled in namespace services-2651
STEP: creating replication controller service-proxy-disabled in namespace services-2651
I0610 23:20:27.102194      32 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-2651, replica count: 3
I0610 23:20:30.154037      32 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:33.157156      32 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:36.157762      32 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:39.159377      32 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-2651
STEP: creating service service-proxy-toggled in namespace services-2651
STEP: creating replication controller service-proxy-toggled in namespace services-2651
I0610 23:20:39.174696      32 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-2651, replica count: 3
I0610 23:20:42.225549      32 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:45.226414      32 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:48.228601      32 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:51.229392      32 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Jun 10 23:20:51.231: INFO: Creating new host exec pod
Jun 10 23:20:51.245: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:53.249: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:20:55.250: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 10 23:20:55.250: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 10 23:21:09.267: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.35.24:80 2>&1 || true; echo; done" in pod services-2651/verify-service-up-host-exec-pod
Jun 10 23:21:09.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.35.24:80 2>&1 || true; echo; done'
Jun 10 23:21:11.133: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n[the '+ wget' / '+ echo' trace pair repeats identically for all 150 iterations]\n"
Jun 10 23:21:11.133: INFO: stdout: "service-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled
-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggle
d-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\n"
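The backend-distribution check recorded above can be reproduced by hand: collect the per-request responses and count the distinct pod names with sort/uniq. A minimal sketch; the pod names are taken from the log, and against a live cluster the `responses` variable would instead be filled by the wget loop the test runs:

```shell
# Count distinct backend pod names answering a service, as the
# verify-service-up step does. The response list here is a fixed
# sample standing in for 150 live wget results.
responses="service-proxy-toggled-vs72d
service-proxy-toggled-lhqb9
service-proxy-toggled-82k4x
service-proxy-toggled-vs72d"
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l | tr -d ' ')
echo "distinct backends: $distinct"
```

The test passes when the count matches the expected replica count (3 in this run).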
Jun 10 23:21:11.134: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.35.24:80 2>&1 || true; echo; done" in pod services-2651/verify-service-up-exec-pod-wdvrx
Jun 10 23:21:11.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-up-exec-pod-wdvrx -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.35.24:80 2>&1 || true; echo; done'
Jun 10 23:21:11.560: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n[the '+ wget' / '+ echo' trace pair repeats identically for all 150 iterations]\n"
Jun 10 23:21:11.560: INFO: stdout: "service-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled
-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggle
d-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2651
STEP: Deleting pod verify-service-up-exec-pod-wdvrx in namespace services-2651
STEP: verifying service-disabled is not up
Jun 10 23:21:11.579: INFO: Creating new host exec pod
Jun 10 23:21:11.593: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:13.596: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:15.597: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:17.599: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:19.596: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 10 23:21:19.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.8.235:80 && echo service-down-failed'
Jun 10 23:21:21.845: INFO: rc: 28
Jun 10 23:21:21.845: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.8.235:80 && echo service-down-failed" in pod services-2651/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.8.235:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.8.235:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2651
STEP: adding service-proxy-name label
STEP: verifying service is not up
Jun 10 23:21:21.860: INFO: Creating new host exec pod
Jun 10 23:21:21.873: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:23.877: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:25.875: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:27.878: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:29.876: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:31.876: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:33.878: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:35.878: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:37.878: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:39.876: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 10 23:21:39.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.35.24:80 && echo service-down-failed'
Jun 10 23:21:42.216: INFO: rc: 28
Jun 10 23:21:42.216: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.35.24:80 && echo service-down-failed" in pod services-2651/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.35.24:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.35.24:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2651
STEP: removing service-proxy-name label
STEP: verifying service is up
Jun 10 23:21:42.231: INFO: Creating new host exec pod
Jun 10 23:21:42.242: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:44.247: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:46.245: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 10 23:21:46.245: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 10 23:21:50.263: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.35.24:80 2>&1 || true; echo; done" in pod services-2651/verify-service-up-host-exec-pod
Jun 10 23:21:50.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.35.24:80 2>&1 || true; echo; done'
Jun 10 23:21:50.637: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n"
Jun 10 23:21:50.637: INFO: stdout: "service-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled
-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggle
d-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\n"
Jun 10 23:21:50.638: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.35.24:80 2>&1 || true; echo; done" in pod services-2651/verify-service-up-exec-pod-hvtxn
Jun 10 23:21:50.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-up-exec-pod-hvtxn -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.35.24:80 2>&1 || true; echo; done'
Jun 10 23:21:51.036: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.35.24:80\n+ echo\n"
Jun 10 23:21:51.037: INFO: stdout: "service-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled
-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggle
d-82k4x\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-lhqb9\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-vs72d\nservice-proxy-toggled-82k4x\nservice-proxy-toggled-82k4x\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2651
STEP: Deleting pod verify-service-up-exec-pod-hvtxn in namespace services-2651
STEP: verifying service-disabled is still not up
Jun 10 23:21:51.053: INFO: Creating new host exec pod
Jun 10 23:21:51.065: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 10 23:21:53.069: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 10 23:21:53.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.8.235:80 && echo service-down-failed'
Jun 10 23:21:55.313: INFO: rc: 28
Jun 10 23:21:55.313: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.8.235:80 && echo service-down-failed" in pod services-2651/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2651 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.8.235:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.8.235:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2651
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:21:55.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2651" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:88.263 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":3,"skipped":671,"failed":0}
Jun 10 23:21:55.335: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:20:41.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-4578
Jun 10 23:20:41.262: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-4578
I0610 23:20:41.274807      35 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-4578, replica count: 2
I0610 23:20:44.326601      35 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:47.327811      35 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0610 23:20:50.328770      35 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 10 23:20:50.328: INFO: Creating new exec pod
Jun 10 23:20:57.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Jun 10 23:20:57.624: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Jun 10 23:20:57.624: INFO: stdout: "nodeport-update-service-klb7d"
Jun 10 23:20:57.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.2.170 80'
Jun 10 23:20:58.305: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.2.170 80\nConnection to 10.233.2.170 80 port [tcp/http] succeeded!\n"
Jun 10 23:20:58.305: INFO: stdout: "nodeport-update-service-klb7d"
Jun 10 23:20:58.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:20:59.152: INFO: rc: 1
Jun 10 23:20:59.152: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:00.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:00.559: INFO: rc: 1
Jun 10 23:21:00.559: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:01.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:01.407: INFO: rc: 1
Jun 10 23:21:01.407: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:40.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:40.405: INFO: rc: 1
Jun 10 23:21:40.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:41.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:41.426: INFO: rc: 1
Jun 10 23:21:41.426: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:42.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:42.514: INFO: rc: 1
Jun 10 23:21:42.514: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:43.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:43.389: INFO: rc: 1
Jun 10 23:21:43.389: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:44.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:44.403: INFO: rc: 1
Jun 10 23:21:44.404: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:45.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:45.397: INFO: rc: 1
Jun 10 23:21:45.397: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:46.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:46.396: INFO: rc: 1
Jun 10 23:21:46.396: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:47.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:47.398: INFO: rc: 1
Jun 10 23:21:47.398: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:48.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:48.428: INFO: rc: 1
Jun 10 23:21:48.428: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:49.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:49.425: INFO: rc: 1
Jun 10 23:21:49.425: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:50.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:50.408: INFO: rc: 1
Jun 10 23:21:50.408: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:51.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:51.494: INFO: rc: 1
Jun 10 23:21:51.495: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:52.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:52.413: INFO: rc: 1
Jun 10 23:21:52.413: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:53.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:53.400: INFO: rc: 1
Jun 10 23:21:53.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:54.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:54.385: INFO: rc: 1
Jun 10 23:21:54.385: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:55.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:55.395: INFO: rc: 1
Jun 10 23:21:55.395: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:56.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:56.422: INFO: rc: 1
Jun 10 23:21:56.422: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:57.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:57.396: INFO: rc: 1
Jun 10 23:21:57.396: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31060
+ echo hostName
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:58.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:58.540: INFO: rc: 1
Jun 10 23:21:58.541: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:21:59.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:21:59.404: INFO: rc: 1
Jun 10 23:21:59.404: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31060
+ echo hostName
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:00.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:00.432: INFO: rc: 1
Jun 10 23:22:00.432: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:01.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:01.402: INFO: rc: 1
Jun 10 23:22:01.402: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:02.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:02.383: INFO: rc: 1
Jun 10 23:22:02.383: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:03.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:03.413: INFO: rc: 1
Jun 10 23:22:03.413: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:04.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:04.615: INFO: rc: 1
Jun 10 23:22:04.615: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:05.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:05.395: INFO: rc: 1
Jun 10 23:22:05.395: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:06.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:06.395: INFO: rc: 1
Jun 10 23:22:06.395: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:07.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:07.392: INFO: rc: 1
Jun 10 23:22:07.392: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:08.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:08.396: INFO: rc: 1
Jun 10 23:22:08.396: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:09.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:09.407: INFO: rc: 1
Jun 10 23:22:09.407: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:10.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:10.398: INFO: rc: 1
Jun 10 23:22:10.398: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:11.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:11.400: INFO: rc: 1
Jun 10 23:22:11.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:12.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:12.405: INFO: rc: 1
Jun 10 23:22:12.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:13.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:13.407: INFO: rc: 1
Jun 10 23:22:13.407: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:14.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:14.396: INFO: rc: 1
Jun 10 23:22:14.396: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:15.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:15.398: INFO: rc: 1
Jun 10 23:22:15.398: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:16.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:16.468: INFO: rc: 1
Jun 10 23:22:16.468: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:17.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:17.490: INFO: rc: 1
Jun 10 23:22:17.490: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31060
+ echo hostName
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:18.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:18.418: INFO: rc: 1
Jun 10 23:22:18.418: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:19.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:19.588: INFO: rc: 1
Jun 10 23:22:19.588: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:20.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:20.609: INFO: rc: 1
Jun 10 23:22:20.609: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:21.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:21.476: INFO: rc: 1
Jun 10 23:22:21.476: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:22.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:22.400: INFO: rc: 1
Jun 10 23:22:22.400: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... 37 further identical retry attempts elided: the same kubectl exec / nc probe was rerun roughly once per second from 23:22:23 through 23:22:59, each returning rc: 1 with "nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused" ...]
Jun 10 23:22:59.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060'
Jun 10 23:22:59.673: INFO: rc: 1
Jun 10 23:22:59.673: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4578 exec execpodvq78m -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31060:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31060
nc: connect to 10.10.190.207 port 31060 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 10 23:22:59.674: FAIL: Unexpected error:
    <*errors.errorString | 0xc0046d6de0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31060 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31060 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000783e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000783e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000783e00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
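(The failure is the framework's reachability wait expiring: the probe is retried about once per second and the test fails once the 2m0s deadline passes with no successful connect. A minimal Python sketch of that retry-until-deadline pattern, assumed behavior rather than the actual service.go code; `flaky` is a hypothetical stand-in probe:)

```python
import time

def wait_reachable(probe, deadline_s: float = 120.0, interval_s: float = 1.0) -> bool:
    """Retry `probe` about once per interval until it succeeds or the
    deadline expires; False corresponds to the log's
    "service is not reachable within 2m0s timeout" error."""
    deadline = time.monotonic() + deadline_s
    while True:
        if probe():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)

if __name__ == "__main__":
    attempts = []
    def flaky():  # hypothetical probe: succeeds on the third call
        attempts.append(1)
        return len(attempts) >= 3
    print(wait_reachable(flaky, deadline_s=10.0, interval_s=0.01))
```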
Jun 10 23:22:59.675: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4578".
STEP: Found 17 events.
Jun 10 23:22:59.702: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodvq78m: { } Scheduled: Successfully assigned services-4578/execpodvq78m to node1
Jun 10 23:22:59.702: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-8ph5w: { } Scheduled: Successfully assigned services-4578/nodeport-update-service-8ph5w to node2
Jun 10 23:22:59.702: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-klb7d: { } Scheduled: Successfully assigned services-4578/nodeport-update-service-klb7d to node1
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:41 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-klb7d
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:41 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-8ph5w
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:43 +0000 UTC - event for nodeport-update-service-klb7d: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:44 +0000 UTC - event for nodeport-update-service-8ph5w: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:44 +0000 UTC - event for nodeport-update-service-8ph5w: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 351.79475ms
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:44 +0000 UTC - event for nodeport-update-service-klb7d: {kubelet node1} Started: Started container nodeport-update-service
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:44 +0000 UTC - event for nodeport-update-service-klb7d: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 296.03226ms
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:44 +0000 UTC - event for nodeport-update-service-klb7d: {kubelet node1} Created: Created container nodeport-update-service
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:45 +0000 UTC - event for nodeport-update-service-8ph5w: {kubelet node2} Started: Started container nodeport-update-service
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:45 +0000 UTC - event for nodeport-update-service-8ph5w: {kubelet node2} Created: Created container nodeport-update-service
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:52 +0000 UTC - event for execpodvq78m: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:53 +0000 UTC - event for execpodvq78m: {kubelet node1} Started: Started container agnhost-container
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:53 +0000 UTC - event for execpodvq78m: {kubelet node1} Created: Created container agnhost-container
Jun 10 23:22:59.702: INFO: At 2022-06-10 23:20:53 +0000 UTC - event for execpodvq78m: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 571.757149ms
Jun 10 23:22:59.705: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Jun 10 23:22:59.705: INFO: execpodvq78m                   node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:50 +0000 UTC  }]
Jun 10 23:22:59.706: INFO: nodeport-update-service-8ph5w  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:45 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:45 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:41 +0000 UTC  }]
Jun 10 23:22:59.706: INFO: nodeport-update-service-klb7d  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-10 23:20:41 +0000 UTC  }]
Jun 10 23:22:59.706: INFO: 
Jun 10 23:22:59.709: INFO: 
Logging node info for node master1
Jun 10 23:22:59.712: INFO: Node Info: &Node{ObjectMeta:{master1    e472448e-87fd-4e8d-bbb7-98d43d3d8a87 76412 0 2022-06-10 19:57:38 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-10 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:05:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {nfd-master Update v1 2022-06-10 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:55 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:55 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:55 +0000 UTC,LastTransitionTime:2022-06-10 19:57:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:22:55 +0000 UTC,LastTransitionTime:2022-06-10 20:00:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3faca96dd267476388422e9ecfe8ffa5,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a8563bde-8faa-4424-940f-741c59dd35bf,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:22:59.712: INFO: 
Logging kubelet events for node master1
Jun 10 23:22:59.714: INFO: 
Logging pods the kubelet thinks are on node master1
Jun 10 23:22:59.735: INFO: kube-scheduler-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container kube-scheduler ready: true, restart count 0
Jun 10 23:22:59.735: INFO: kube-proxy-rd4j7 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container kube-proxy ready: true, restart count 3
Jun 10 23:22:59.735: INFO: container-registry-65d7c44b96-rsh2n started at 2022-06-10 20:04:56 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container docker-registry ready: true, restart count 0
Jun 10 23:22:59.735: INFO: 	Container nginx ready: true, restart count 0
Jun 10 23:22:59.735: INFO: node-feature-discovery-controller-cff799f9f-74qhv started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container nfd-controller ready: true, restart count 0
Jun 10 23:22:59.735: INFO: prometheus-operator-585ccfb458-kkb8f started at 2022-06-10 20:13:26 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:22:59.735: INFO: 	Container prometheus-operator ready: true, restart count 0
Jun 10 23:22:59.735: INFO: node-exporter-vc67r started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:22:59.735: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:22:59.735: INFO: kube-apiserver-master1 started at 2022-06-10 19:58:43 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 10 23:22:59.735: INFO: kube-controller-manager-master1 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 10 23:22:59.735: INFO: dns-autoscaler-7df78bfcfb-kz7px started at 2022-06-10 20:00:58 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container autoscaler ready: true, restart count 1
Jun 10 23:22:59.735: INFO: kube-flannel-xx9h7 started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Init container install-cni ready: true, restart count 0
Jun 10 23:22:59.735: INFO: 	Container kube-flannel ready: true, restart count 1
Jun 10 23:22:59.735: INFO: kube-multus-ds-amd64-t5pr7 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.735: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:22:59.833: INFO: 
Latency metrics for node master1
Jun 10 23:22:59.834: INFO: 
Logging node info for node master2
Jun 10 23:22:59.837: INFO: Node Info: &Node{ObjectMeta:{master2    66c7af40-c8de-462b-933d-792f10a44a43 76407 0 2022-06-10 19:58:07 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-10 19:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:20 +0000 UTC,LastTransitionTime:2022-06-10 20:03:20 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:54 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:54 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:54 +0000 UTC,LastTransitionTime:2022-06-10 19:58:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:22:54 +0000 UTC,LastTransitionTime:2022-06-10 20:00:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:31687d4b1abb46329a442e068ee56c42,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:e234d452-a6d8-4bf0-b98d-a080613c39e9,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:22:59.837: INFO: 
Logging kubelet events for node master2
Jun 10 23:22:59.840: INFO: 
Logging pods the kubelet thinks are on node master2
Jun 10 23:22:59.848: INFO: kube-apiserver-master2 started at 2022-06-10 19:58:44 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.848: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 10 23:22:59.848: INFO: node-exporter-6fbrb started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:22:59.848: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:22:59.848: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:22:59.848: INFO: coredns-8474476ff8-hlspd started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.848: INFO: 	Container coredns ready: true, restart count 1
Jun 10 23:22:59.848: INFO: kube-controller-manager-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.848: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jun 10 23:22:59.848: INFO: kube-scheduler-master2 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.848: INFO: 	Container kube-scheduler ready: true, restart count 3
Jun 10 23:22:59.848: INFO: kube-proxy-2kbvc started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.848: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 10 23:22:59.848: INFO: kube-flannel-ftn9l started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:22:59.848: INFO: 	Init container install-cni ready: true, restart count 2
Jun 10 23:22:59.848: INFO: 	Container kube-flannel ready: true, restart count 1
Jun 10 23:22:59.848: INFO: kube-multus-ds-amd64-nrmqq started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.848: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:22:59.927: INFO: 
Latency metrics for node master2
Jun 10 23:22:59.927: INFO: 
Logging node info for node master3
Jun 10 23:22:59.930: INFO: Node Info: &Node{ObjectMeta:{master3    e51505ec-e791-4bbe-aeb1-bd0671fd4464 76399 0 2022-06-10 19:58:16 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-10 19:58:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-10 20:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-10 20:10:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:14 +0000 UTC,LastTransitionTime:2022-06-10 20:03:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:51 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:51 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:51 +0000 UTC,LastTransitionTime:2022-06-10 19:58:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:22:51 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1f373495c4c54f68a37fa0d50cd1da58,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a719d949-f9d1-4ee4-a79b-ab3a929b7d00,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:22:59.931: INFO: 
Logging kubelet events for node master3
Jun 10 23:22:59.933: INFO: 
Logging pods the kubelet thinks are on node master3
Jun 10 23:22:59.942: INFO: kube-apiserver-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.943: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 10 23:22:59.943: INFO: kube-controller-manager-master3 started at 2022-06-10 20:06:49 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.943: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 10 23:22:59.943: INFO: kube-scheduler-master3 started at 2022-06-10 20:03:07 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.943: INFO: 	Container kube-scheduler ready: true, restart count 1
Jun 10 23:22:59.943: INFO: kube-multus-ds-amd64-8b4tg started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.943: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:22:59.943: INFO: coredns-8474476ff8-s8q89 started at 2022-06-10 20:00:56 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.943: INFO: 	Container coredns ready: true, restart count 1
Jun 10 23:22:59.943: INFO: node-exporter-q4rw6 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:22:59.943: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:22:59.943: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:22:59.943: INFO: kube-proxy-rm9n6 started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:22:59.943: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 10 23:22:59.943: INFO: kube-flannel-jpd2j started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:22:59.943: INFO: 	Init container install-cni ready: true, restart count 2
Jun 10 23:22:59.943: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 10 23:23:00.026: INFO: 
Latency metrics for node master3
Jun 10 23:23:00.026: INFO: 
Logging node info for node node1
Jun 10 23:23:00.029: INFO: Node Info: &Node{ObjectMeta:{node1    fa951133-0317-499e-8a0a-fc7a0636a371 76402 0 2022-06-10 19:59:19 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 
19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 22:28:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:13 +0000 UTC,LastTransitionTime:2022-06-10 20:03:13 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:51 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:51 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:22:51 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:22:51 +0000 UTC,LastTransitionTime:2022-06-10 20:00:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:aabc551d0ffe4cb3b41c0db91649a9a2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:fea48af7-d08f-4093-b808-340d06faf38b,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 
k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:bec743bd4fe4525edfd5f3c9bb11da21629092dfe60d396ce7f8168ac1088695 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:23:00.030: INFO: 
Logging kubelet events for node node1
Jun 10 23:23:00.032: INFO: 
Logging pods the kubelet thinks are on node node1
Jun 10 23:23:00.042: INFO: cmk-qjrhs started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container nodereport ready: true, restart count 0
Jun 10 23:23:00.042: INFO: 	Container reconcile ready: true, restart count 0
Jun 10 23:23:00.042: INFO: nginx-proxy-node1 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 10 23:23:00.042: INFO: kube-flannel-x926c started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Init container install-cni ready: true, restart count 2
Jun 10 23:23:00.042: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 10 23:23:00.042: INFO: kube-multus-ds-amd64-4gckf started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:23:00.042: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn started at 2022-06-10 20:16:40 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container tas-extender ready: true, restart count 0
Jun 10 23:23:00.042: INFO: kube-proxy-5bkrr started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 10 23:23:00.042: INFO: nodeport-update-service-klb7d started at 2022-06-10 23:20:41 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun 10 23:23:00.042: INFO: prometheus-k8s-0 started at 2022-06-10 20:13:45 +0000 UTC (0+4 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container config-reloader ready: true, restart count 0
Jun 10 23:23:00.042: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 23:23:00.042: INFO: 	Container grafana ready: true, restart count 0
Jun 10 23:23:00.042: INFO: 	Container prometheus ready: true, restart count 1
Jun 10 23:23:00.042: INFO: cmk-init-discover-node1-hlbt6 started at 2022-06-10 20:11:42 +0000 UTC (0+3 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container discover ready: false, restart count 0
Jun 10 23:23:00.042: INFO: 	Container init ready: false, restart count 0
Jun 10 23:23:00.042: INFO: 	Container install ready: false, restart count 0
Jun 10 23:23:00.042: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 10 23:23:00.042: INFO: node-exporter-tk8f9 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:23:00.042: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:23:00.042: INFO: execpodvq78m started at 2022-06-10 23:20:50 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 10 23:23:00.042: INFO: node-feature-discovery-worker-9xsdt started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 10 23:23:00.042: INFO: collectd-kpj5z started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container collectd ready: true, restart count 0
Jun 10 23:23:00.042: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 10 23:23:00.042: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 10 23:23:00.042: INFO: cmk-webhook-6c9d5f8578-n9w8j started at 2022-06-10 20:12:30 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.042: INFO: 	Container cmk-webhook ready: true, restart count 0
Jun 10 23:23:00.234: INFO: 
Latency metrics for node node1
Jun 10 23:23:00.234: INFO: 
Logging node info for node node2
Jun 10 23:23:00.237: INFO: Node Info: &Node{ObjectMeta:{node2    e3ba5b73-7a35-4d3f-9138-31db06c90dc3 76422 0 2022-06-10 19:59:19 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-10 19:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-10 
19:59:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-10 20:00:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-10 20:08:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-10 20:12:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-10 22:28:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-10 22:43:49 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-10 20:03:16 +0000 UTC,LastTransitionTime:2022-06-10 20:03:16 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-10 23:23:00 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-10 23:23:00 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-10 23:23:00 +0000 UTC,LastTransitionTime:2022-06-10 19:59:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-10 23:23:00 +0000 UTC,LastTransitionTime:2022-06-10 20:00:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bb5fb4a83f9949939cd41b7583e9b343,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:bd9c2046-c9ae-4b83-a147-c07e3487254e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:fa61e6e6fee0a4d296013d2993a9ff5538ff0b2e232e6b9c661a6604d93ce888 
localhost:30500/cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727708945,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 
k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:73408b8d6699bf382b8f7526b6d0a986fad0f037440cd9aabd8985a7e1dbea07 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 10 23:23:00.238: INFO: 
Logging kubelet events for node node2
Jun 10 23:23:00.240: INFO: 
Logging pods the kubelet thinks are on node node2
Jun 10 23:23:00.251: INFO: node-exporter-trpg7 started at 2022-06-10 20:13:33 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:23:00.251: INFO: 	Container node-exporter ready: true, restart count 0
Jun 10 23:23:00.251: INFO: nginx-proxy-node2 started at 2022-06-10 19:59:19 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 10 23:23:00.251: INFO: kube-multus-ds-amd64-nj866 started at 2022-06-10 20:00:29 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container kube-multus ready: true, restart count 1
Jun 10 23:23:00.251: INFO: kubernetes-dashboard-785dcbb76d-7pmgn started at 2022-06-10 20:01:00 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Jun 10 23:23:00.251: INFO: cmk-zpstc started at 2022-06-10 20:12:29 +0000 UTC (0+2 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container nodereport ready: true, restart count 0
Jun 10 23:23:00.251: INFO: 	Container reconcile ready: true, restart count 0
Jun 10 23:23:00.251: INFO: node-feature-discovery-worker-s9mwk started at 2022-06-10 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 10 23:23:00.251: INFO: kube-proxy-4clxz started at 2022-06-10 19:59:24 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 10 23:23:00.251: INFO: kube-flannel-8jl6m started at 2022-06-10 20:00:20 +0000 UTC (1+1 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Init container install-cni ready: true, restart count 2
Jun 10 23:23:00.251: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 10 23:23:00.251: INFO: cmk-init-discover-node2-jxvbr started at 2022-06-10 20:12:04 +0000 UTC (0+3 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container discover ready: false, restart count 0
Jun 10 23:23:00.251: INFO: 	Container init ready: false, restart count 0
Jun 10 23:23:00.251: INFO: 	Container install ready: false, restart count 0
Jun 10 23:23:00.251: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 started at 2022-06-10 20:09:21 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 10 23:23:00.251: INFO: collectd-srmjh started at 2022-06-10 20:17:30 +0000 UTC (0+3 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container collectd ready: true, restart count 0
Jun 10 23:23:00.251: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 10 23:23:00.251: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 10 23:23:00.251: INFO: nodeport-update-service-8ph5w started at 2022-06-10 23:20:41 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun 10 23:23:00.251: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn started at 2022-06-10 20:01:01 +0000 UTC (0+1 container statuses recorded)
Jun 10 23:23:00.251: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 23:23:00.415: INFO: 
Latency metrics for node node2
Jun 10 23:23:00.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4578" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [139.189 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Jun 10 23:22:59.674: Unexpected error:
      <*errors.errorString | 0xc0046d6de0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31060 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31060 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":2,"skipped":698,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Jun 10 23:23:00.432: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":142,"failed":0}
Jun 10 23:21:32.486: INFO: Running AfterSuite actions on all nodes
Jun 10 23:23:00.493: INFO: Running AfterSuite actions on node 1
Jun 10 23:23:00.493: INFO: Skipping dumping logs from cluster



Summarizing 2 Failures:

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

Ran 26 of 5773 Specs in 224.337 seconds
FAIL! -- 24 Passed | 2 Failed | 0 Pending | 5747 Skipped


Ginkgo ran 1 suite in 3m46.061501307s
Test Suite Failed