Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654298046 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Jun  3 23:14:08.259: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.267: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:08.293: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun  3 23:14:08.356: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun  3 23:14:08.357: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun  3 23:14:08.357: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun  3 23:14:08.357: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun  3 23:14:08.357: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun  3 23:14:08.374: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun  3 23:14:08.374: INFO: e2e test version: v1.21.9
Jun  3 23:14:08.375: INFO: kube-apiserver version: v1.21.1
Jun  3 23:14:08.376: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.383: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Jun  3 23:14:08.378: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.399: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Jun  3 23:14:08.386: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.409: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Jun  3 23:14:08.388: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.412: INFO: Cluster IP family: ipv4
SSS
------------------------------
Jun  3 23:14:08.395: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.415: INFO: Cluster IP family: ipv4
SS
------------------------------
Jun  3 23:14:08.391: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.415: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Jun  3 23:14:08.397: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.419: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Jun  3 23:14:08.401: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.422: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun  3 23:14:08.421: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.443: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Jun  3 23:14:08.423: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:08.446: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:08.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
W0603 23:14:08.447679      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:08.447: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:08.449: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Jun  3 23:14:08.469: INFO: (0) /api/v1/nodes/node1/proxy/logs/:
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun  3 23:14:08.940: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:08.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-8566" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
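The ESIPP (external source IP preservation / ExternalTrafficPolicy) specs above and below are provider-gated: their BeforeEach skips anything that is not gce or gke because they need cloud LoadBalancer services, so with the provider set to local they never run. If those paths need coverage, the run has to target a supported provider; an illustrative invocation of the e2e.test binary (focus pattern and paths here are examples, not taken from this run):

# Run only the ESIPP specs against a provider that offers LoadBalancers.
./e2e.test --provider=gce --kubeconfig=/root/.kube/config \
  -ginkgo.focus='\[sig-network\] ESIPP'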
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:09.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Jun  3 23:14:19.283: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
Jun  3 23:14:19.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testbxg22 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
Jun  3 23:14:19.603: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
Jun  3 23:14:19.603: INFO: stdout: "10.244.4.32:50440"
STEP: Verifying the preserved source ip
Jun  3 23:14:19.603: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Jun  3 23:14:19.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testbxg22 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Jun  3 23:14:19.896: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Jun  3 23:14:19.896: INFO: stdout: "10.244.4.32:41604"
STEP: Verifying the preserved source ip
Jun  3 23:14:19.896: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Jun  3 23:14:19.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testbxg22 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Jun  3 23:14:20.474: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Jun  3 23:14:20.474: INFO: stdout: "10.244.4.32:49568"
STEP: Verifying the preserved source ip
Jun  3 23:14:20.474: INFO: Waiting up to 2m0s to get response from 10.244.3.152:8080
Jun  3 23:14:20.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testbxg22 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.152:8080/clientip'
Jun  3 23:14:20.725: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.152:8080/clientip\n"
Jun  3 23:14:20.725: INFO: stdout: "10.244.4.32:51032"
STEP: Verifying the preserved source ip
Jun  3 23:14:20.725: INFO: Waiting up to 2m0s to get response from 10.244.4.32:8080
Jun  3 23:14:20.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testdv29d -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.32:8080/clientip'
Jun  3 23:14:21.013: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.32:8080/clientip\n"
Jun  3 23:14:21.013: INFO: stdout: "10.244.2.6:48674"
STEP: Verifying the preserved source ip
Jun  3 23:14:21.013: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Jun  3 23:14:21.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testdv29d -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Jun  3 23:14:21.280: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Jun  3 23:14:21.280: INFO: stdout: "10.244.2.6:51492"
STEP: Verifying the preserved source ip
Jun  3 23:14:21.280: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Jun  3 23:14:21.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testdv29d -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Jun  3 23:14:21.529: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Jun  3 23:14:21.529: INFO: stdout: "10.244.2.6:57234"
STEP: Verifying the preserved source ip
Jun  3 23:14:21.529: INFO: Waiting up to 2m0s to get response from 10.244.3.152:8080
Jun  3 23:14:21.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testdv29d -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.152:8080/clientip'
Jun  3 23:14:21.801: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.152:8080/clientip\n"
Jun  3 23:14:21.801: INFO: stdout: "10.244.2.6:39038"
STEP: Verifying the preserved source ip
Jun  3 23:14:21.801: INFO: Waiting up to 2m0s to get response from 10.244.4.32:8080
Jun  3 23:14:21.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testmxkwc -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.32:8080/clientip'
Jun  3 23:14:22.064: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.32:8080/clientip\n"
Jun  3 23:14:22.064: INFO: stdout: "10.244.1.4:56362"
STEP: Verifying the preserved source ip
Jun  3 23:14:22.064: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
Jun  3 23:14:22.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testmxkwc -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
Jun  3 23:14:22.686: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
Jun  3 23:14:22.686: INFO: stdout: "10.244.1.4:60628"
STEP: Verifying the preserved source ip
Jun  3 23:14:22.686: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Jun  3 23:14:22.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testmxkwc -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Jun  3 23:14:22.920: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Jun  3 23:14:22.920: INFO: stdout: "10.244.1.4:52698"
STEP: Verifying the preserved source ip
Jun  3 23:14:22.920: INFO: Waiting up to 2m0s to get response from 10.244.3.152:8080
Jun  3 23:14:22.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testmxkwc -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.152:8080/clientip'
Jun  3 23:14:23.166: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.152:8080/clientip\n"
Jun  3 23:14:23.166: INFO: stdout: "10.244.1.4:50824"
STEP: Verifying the preserved source ip
Jun  3 23:14:23.166: INFO: Waiting up to 2m0s to get response from 10.244.4.32:8080
Jun  3 23:14:23.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testvdm8v -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.32:8080/clientip'
Jun  3 23:14:23.406: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.32:8080/clientip\n"
Jun  3 23:14:23.406: INFO: stdout: "10.244.0.10:47172"
STEP: Verifying the preserved source ip
Jun  3 23:14:23.406: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
Jun  3 23:14:23.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testvdm8v -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
Jun  3 23:14:23.647: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
Jun  3 23:14:23.648: INFO: stdout: "10.244.0.10:38390"
STEP: Verifying the preserved source ip
Jun  3 23:14:23.648: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Jun  3 23:14:23.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testvdm8v -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Jun  3 23:14:23.891: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Jun  3 23:14:23.891: INFO: stdout: "10.244.0.10:33566"
STEP: Verifying the preserved source ip
Jun  3 23:14:23.891: INFO: Waiting up to 2m0s to get response from 10.244.3.152:8080
Jun  3 23:14:23.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testvdm8v -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.152:8080/clientip'
Jun  3 23:14:24.147: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.152:8080/clientip\n"
Jun  3 23:14:24.147: INFO: stdout: "10.244.0.10:58440"
STEP: Verifying the preserved source ip
Jun  3 23:14:24.147: INFO: Waiting up to 2m0s to get response from 10.244.4.32:8080
Jun  3 23:14:24.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testwncqr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.32:8080/clientip'
Jun  3 23:14:24.400: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.32:8080/clientip\n"
Jun  3 23:14:24.400: INFO: stdout: "10.244.3.152:35064"
STEP: Verifying the preserved source ip
Jun  3 23:14:24.400: INFO: Waiting up to 2m0s to get response from 10.244.2.6:8080
Jun  3 23:14:24.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testwncqr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip'
Jun  3 23:14:24.656: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.6:8080/clientip\n"
Jun  3 23:14:24.656: INFO: stdout: "10.244.3.152:54418"
STEP: Verifying the preserved source ip
Jun  3 23:14:24.656: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Jun  3 23:14:24.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testwncqr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Jun  3 23:14:24.894: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Jun  3 23:14:24.894: INFO: stdout: "10.244.3.152:36772"
STEP: Verifying the preserved source ip
Jun  3 23:14:24.894: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Jun  3 23:14:24.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9669 exec no-snat-testwncqr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Jun  3 23:14:25.165: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Jun  3 23:14:25.165: INFO: stdout: "10.244.3.152:40964"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:25.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-9669" for this suite.


• [SLOW TEST:15.974 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":2,"skipped":271,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:08.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W0603 23:14:08.862295      33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:08.862: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:08.865: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Jun  3 23:14:08.887: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-78378b83-b31e-4a02-ace2-ed24dfbdbb1b  dns-2310  387a67b6-f67a-41a7-9dcf-933c888b2456 71445 0 2022-06-03 23:14:08 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-06-03 23:14:08 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-6q9bf,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-z2s9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z2s9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun  3 23:14:18.897: INFO: testServerIP is 10.244.3.150
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jun  3 23:14:18.909: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-2310  1214dd03-5600-48cd-89d3-4e86c46b602e 71683 0 2022-06-03 23:14:18 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-06-03 23:14:18 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-snpk6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-snpk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:ni
l,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.3.150],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Jun  3 23:14:24.920: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-2310 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun  3 23:14:24.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Jun  3 23:14:25.028: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-2310 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun  3 23:14:25.028: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:25.159: INFO: Deleting pod e2e-dns-utils...
Jun  3 23:14:25.167: INFO: Deleting pod e2e-configmap-dns-server-78378b83-b31e-4a02-ace2-ed24dfbdbb1b...
Jun  3 23:14:25.173: INFO: Deleting configmap e2e-coredns-configmap-6q9bf...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:25.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2310" for this suite.


• [SLOW TEST:16.349 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":151,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:25.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Jun  3 23:14:25.381: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:25.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-704" for this suite.


S [SKIPPING] [0.031 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:25.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun  3 23:14:25.533: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:25.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-8168" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:25.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-4998
STEP: changing service nodeport-range-test to out-of-range NodePort 57182
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 57182
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:25.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4998" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":3,"skipped":425,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:26.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Jun  3 23:14:26.201: INFO: (0) /api/v1/nodes/node2:10250/proxy/logs/: 
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0603 23:14:08.516610      37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:08.516: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:08.518: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-8913
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:08.664: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:08.695: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:10.698: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:12.701: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:14.701: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:16.697: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:18.699: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:20.702: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:22.701: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:24.700: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:26.698: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:28.699: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:30.700: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:14:30.709: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:14:34.757: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:14:34.757: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:34.764: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:34.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8913" for this suite.


S [SKIPPING] [26.280 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
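Several Granular Checks specs in this run are skipped with "Requires at least 2 nodes (not -1)"; the -1 suggests the framework failed to count schedulable test nodes at that point rather than the cluster actually having fewer than two. A quick manual cross-check of what the cluster exposes, assuming the same kubeconfig (purely illustrative, not part of the suite):

# How many nodes exist, and are any cordoned or tainted away from test pods?
kubectl --kubeconfig=/root/.kube/config get nodes -o wide
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.unschedulable}{"\t"}{.spec.taints}{"\n"}{end}'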
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:08.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0603 23:14:08.614459      39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:08.614: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:08.616: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-7301
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:08.742: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:08.772: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:10.775: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:12.776: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:14.777: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:16.777: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:18.775: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:20.779: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:22.777: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:24.778: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:26.775: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:28.775: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:30.775: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:14:30.780: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:14:38.813: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:14:38.813: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:38.820: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:38.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7301" for this suite.


S [SKIPPING] [30.239 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:08.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0603 23:14:08.574937      41 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:08.575: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:08.576: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-5629
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:08.693: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:08.725: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:10.728: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:12.729: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:14.729: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:16.728: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:18.729: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:20.729: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:22.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:24.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:26.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:28.728: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:30.729: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:14:30.735: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:14:38.814: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:14:38.814: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:38.822: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:38.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5629" for this suite.


S [SKIPPING] [30.279 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:08.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0603 23:14:08.823170      35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:08.823: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:08.825: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-2970
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:08.939: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:08.970: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:10.975: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:12.978: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:14.975: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:16.973: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:18.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:20.976: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:22.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:24.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:26.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:28.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:30.974: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:14:30.978: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:14:39.001: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:14:39.001: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:39.008: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:39.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2970" for this suite.


S [SKIPPING] [30.214 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:35.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Jun  3 23:14:35.151: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-7210" to be "Succeeded or Failed"
Jun  3 23:14:35.153: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02918ms
Jun  3 23:14:37.156: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004859156s
Jun  3 23:14:39.159: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007765741s
STEP: Saw pod success
Jun  3 23:14:39.159: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:39.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7210" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":1,"skipped":139,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:09.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0603 23:14:09.239360      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:09.239: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:09.245: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-6024
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:09.356: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:09.388: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:11.391: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:13.391: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:15.393: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:17.391: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:19.391: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:21.393: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:23.391: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:25.393: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:27.394: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:29.396: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:31.393: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:14:31.397: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:14:39.422: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:14:39.422: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:39.429: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:39.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6024" for this suite.


S [SKIPPING] [30.221 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
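
Every "Requires at least 2 nodes (not -1)" skip in this run comes from the same setup step: before the Granular Checks can exercise node-to-service paths, the framework counts schedulable nodes and bails out when there are fewer than two (the literal "-1" appears to be the suite's configured node count, which was never set for this local run). A minimal sketch of such a gate with client-go, assuming the kubeconfig path shown in this log and a hard-coded threshold; this is not the framework's exact implementation:

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        const minNodes = 2
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        schedulable := 0
        for _, n := range nodes.Items {
            if !n.Spec.Unschedulable {
                schedulable++
            }
        }
        if schedulable < minNodes {
            // In the e2e framework this turns into a SKIP rather than a failure.
            fmt.Printf("SKIP: requires at least %d schedulable nodes (have %d)\n", minNodes, schedulable)
            os.Exit(0)
        }
        fmt.Printf("%d schedulable nodes, enough for the multi-node service checks\n", schedulable)
    }
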
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:39.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun  3 23:14:39.517: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:39.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-5930" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
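
The ESIPP specs (and the L7 Nginx ingress spec later in this log) skip for a different reason: they need a cloud provider that can program an external load balancer, so anything other than gce/gke is rejected in BeforeEach. A minimal sketch of such a provider gate, assuming the provider name is read from a hypothetical E2E_PROVIDER environment variable rather than the suite's real test-context flags:

    package main

    import (
        "fmt"
        "os"
    )

    // Load-balancer and ESIPP behaviour can only be exercised where the cloud
    // can provision an external LB, so other providers are skipped up front.
    func main() {
        provider := os.Getenv("E2E_PROVIDER") // assumption: provider name supplied via the environment
        if provider == "" {
            provider = "local"
        }
        for _, supported := range []string{"gce", "gke"} {
            if provider == supported {
                fmt.Println("provider supported, test would proceed")
                return
            }
        }
        fmt.Printf("SKIP: only supported for providers [gce gke] (not %s)\n", provider)
    }
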
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:39.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun  3 23:14:39.683: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jun  3 23:14:39.685: INFO: starting watch
STEP: patching
STEP: updating
Jun  3 23:14:39.693: INFO: waiting for watch events with expected annotations
Jun  3 23:14:39.693: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Jun  3 23:14:39.693: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:39.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-7812" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":426,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:39.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Jun  3 23:14:39.621: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Jun  3 23:14:39.726: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:39.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-2212" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.142 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:39.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun  3 23:14:39.886: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:39.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-6513" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:08.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0603 23:14:08.494719      31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:08.495: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:08.496: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-4463
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:08.611: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:08.643: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:10.647: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:12.650: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:14.650: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:16.646: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:18.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:20.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:22.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:24.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:26.647: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:28.647: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:30.647: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:14:30.713: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:14:40.774: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:14:40.774: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:40.781: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:40.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4463" for this suite.


S [SKIPPING] [32.317 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:08.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
W0603 23:14:08.443597      30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:08.443: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:08.447: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-614
STEP: creating a client pod for probing the service svc-udp
Jun  3 23:14:08.473: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:10.583: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:12.481: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:14.479: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:16.477: INFO: The status of Pod pod-client is Running (Ready = true)
Jun  3 23:14:16.495: INFO: Pod client logs: Fri Jun  3 23:14:11 UTC 2022
Fri Jun  3 23:14:11 UTC 2022 Try: 1

Fri Jun  3 23:14:11 UTC 2022 Try: 2

Fri Jun  3 23:14:11 UTC 2022 Try: 3

Fri Jun  3 23:14:11 UTC 2022 Try: 4

Fri Jun  3 23:14:11 UTC 2022 Try: 5

Fri Jun  3 23:14:11 UTC 2022 Try: 6

Fri Jun  3 23:14:11 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Jun  3 23:14:16.509: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:18.513: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:20.514: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-614 to expose endpoints map[pod-server-1:[80]]
Jun  3 23:14:20.525: INFO: successfully validated that service svc-udp in namespace conntrack-614 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Jun  3 23:14:30.559: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:32.565: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:34.563: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:36.562: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:38.563: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Jun  3 23:14:38.566: INFO: Cleaning up pod-server-1 pod
Jun  3 23:14:38.574: INFO: Waiting for pod pod-server-1 to disappear
Jun  3 23:14:38.577: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-614 to expose endpoints map[pod-server-2:[80]]
Jun  3 23:14:38.585: INFO: successfully validated that service svc-udp in namespace conntrack-614 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:49.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-614" for this suite.


• [SLOW TEST:40.738 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:09.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
W0603 23:14:09.066072      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun  3 23:14:09.066: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun  3 23:14:09.069: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
Jun  3 23:14:09.082: INFO: deploying iperf2 server
Jun  3 23:14:09.087: INFO: Waiting for deployment "iperf2-server-deployment" to complete
Jun  3 23:14:09.089: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Jun  3 23:14:11.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  3 23:14:13.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  3 23:14:15.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  3 23:14:17.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894849, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  3 23:14:19.103: INFO: waiting for iperf2 server endpoints
Jun  3 23:14:21.108: INFO: found iperf2 server endpoints
Jun  3 23:14:21.108: INFO: waiting for client pods to be running
Jun  3 23:14:23.112: INFO: all client pods are ready: 2 pods
Jun  3 23:14:23.115: INFO: server pod phase Running
Jun  3 23:14:23.115: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 23:14:09 +0000 UTC Reason: Message:}
Jun  3 23:14:23.115: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 23:14:17 +0000 UTC Reason: Message:}
Jun  3 23:14:23.115: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 23:14:17 +0000 UTC Reason: Message:}
Jun  3 23:14:23.115: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-03 23:14:09 +0000 UTC Reason: Message:}
Jun  3 23:14:23.115: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2022-06-03 23:14:16 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://811bae713cd815b1902bafa58a660b752d975115e95a92379f0e81c32bbe904e Started:0xc0043c1feb}
Jun  3 23:14:23.115: INFO: found 2 matching client pods
Jun  3 23:14:23.119: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-1411 PodName:iperf2-clients-fqsmr ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun  3 23:14:23.119: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:23.208: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Jun  3 23:14:23.208: INFO: iperf version: 
Jun  3 23:14:23.208: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-fqsmr (node node1)
Jun  3 23:14:23.211: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-1411 PodName:iperf2-clients-fqsmr ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun  3 23:14:23.211: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:38.375: INFO: Exec stderr: ""
Jun  3 23:14:38.375: INFO: output from exec on client pod iperf2-clients-fqsmr (node node1): 
20220603231424.339,10.244.3.156,53802,10.233.27.123,6789,3,0.0-1.0,119668736,957349888
20220603231425.334,10.244.3.156,53802,10.233.27.123,6789,3,1.0-2.0,117964800,943718400
20220603231426.336,10.244.3.156,53802,10.233.27.123,6789,3,2.0-3.0,97910784,783286272
20220603231427.452,10.244.3.156,53802,10.233.27.123,6789,3,3.0-4.0,85458944,683671552
20220603231428.340,10.244.3.156,53802,10.233.27.123,6789,3,4.0-5.0,98041856,784334848
20220603231429.352,10.244.3.156,53802,10.233.27.123,6789,3,5.0-6.0,109969408,879755264
20220603231430.330,10.244.3.156,53802,10.233.27.123,6789,3,6.0-7.0,116654080,933232640
20220603231431.435,10.244.3.156,53802,10.233.27.123,6789,3,7.0-8.0,117309440,938475520
20220603231432.324,10.244.3.156,53802,10.233.27.123,6789,3,8.0-9.0,107610112,860880896
20220603231433.341,10.244.3.156,53802,10.233.27.123,6789,3,9.0-10.0,117178368,937426944
20220603231433.341,10.244.3.156,53802,10.233.27.123,6789,3,0.0-10.0,1087766528,869831888

Jun  3 23:14:38.378: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-1411 PodName:iperf2-clients-xs2x2 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun  3 23:14:38.378: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:38.467: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Jun  3 23:14:38.467: INFO: iperf version: 
Jun  3 23:14:38.467: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-xs2x2 (node node2)
Jun  3 23:14:38.471: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-1411 PodName:iperf2-clients-xs2x2 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun  3 23:14:38.471: INFO: >>> kubeConfig: /root/.kube/config
Jun  3 23:14:53.605: INFO: Exec stderr: ""
Jun  3 23:14:53.605: INFO: output from exec on client pod iperf2-clients-xs2x2 (node node2): 
20220603231439.570,10.244.4.36,49208,10.233.27.123,6789,3,0.0-1.0,3425435648,27403485184
20220603231440.557,10.244.4.36,49208,10.233.27.123,6789,3,1.0-2.0,3431858176,27454865408
20220603231441.564,10.244.4.36,49208,10.233.27.123,6789,3,2.0-3.0,3486908416,27895267328
20220603231442.571,10.244.4.36,49208,10.233.27.123,6789,3,3.0-4.0,3464101888,27712815104
20220603231443.558,10.244.4.36,49208,10.233.27.123,6789,3,4.0-5.0,3461087232,27688697856
20220603231444.564,10.244.4.36,49208,10.233.27.123,6789,3,5.0-6.0,3355967488,26847739904
20220603231445.570,10.244.4.36,49208,10.233.27.123,6789,3,6.0-7.0,3465150464,27721203712
20220603231446.557,10.244.4.36,49208,10.233.27.123,6789,3,7.0-8.0,3223715840,25789726720
20220603231447.564,10.244.4.36,49208,10.233.27.123,6789,3,8.0-9.0,3512991744,28103933952
20220603231448.571,10.244.4.36,49208,10.233.27.123,6789,3,9.0-10.0,3129475072,25035800576
20220603231448.571,10.244.4.36,49208,10.233.27.123,6789,3,0.0-10.0,33956691968,27165258495

Jun  3 23:14:53.605: INFO:                                From                                 To    Bandwidth (MB/s)
Jun  3 23:14:53.605: INFO:                               node1                              node2                 104
Jun  3 23:14:53.605: INFO:                               node2                              node2                3238
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:53.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-1411" for this suite.


• [SLOW TEST:44.571 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":1,"skipped":255,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:38.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7820.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7820.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7820.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7820.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7820.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7820.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun  3 23:14:53.637: INFO: DNS probes using dns-7820/dns-test-4761e90d-865b-46e4-863b-0fab9892bc4b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:14:53.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7820" for this suite.


• [SLOW TEST:14.713 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":1,"skipped":86,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:49.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Jun  3 23:14:49.240: INFO: Creating new exec pod
Jun  3 23:15:01.261: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Jun  3 23:15:01.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8677 exec execpod-noendpoints6mwqt -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jun  3 23:15:02.533: INFO: rc: 1
Jun  3 23:15:02.533: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8677 exec execpod-noendpoints6mwqt -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:02.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8677" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:13.336 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":2,"skipped":22,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:25.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-6294
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:25.462: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:25.520: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:27.525: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:29.528: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:31.523: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:33.525: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:35.526: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:37.525: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:39.524: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:41.523: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:43.523: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:45.528: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:47.525: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:14:47.530: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:03.553: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:03.554: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:03.561: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:03.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6294" for this suite.


S [SKIPPING] [38.247 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:39.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-b9f7a909-0ec3-41ad-b7f7-244ccbdea8b1]
STEP: Verifying pods for RC slow-terminating-unready-pod
Jun  3 23:14:39.262: INFO: Pod name slow-terminating-unready-pod: Found 0 pods out of 1
Jun  3 23:14:44.265: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Jun  3 23:14:48.291: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-vvvcq]: "NOW: 2022-06-03 23:14:48.290104874 +0000 UTC m=+3.146475137", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-9563.svc.cluster.local
Jun  3 23:14:48.291: INFO: Creating new exec pod
Jun  3 23:14:56.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9563 exec execpod-bt6c8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9563.svc.cluster.local:80/'
Jun  3 23:14:56.727: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9563.svc.cluster.local:80/\n"
Jun  3 23:14:56.727: INFO: stdout: "NOW: 2022-06-03 23:14:56.715242752 +0000 UTC m=+11.571613016"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-9563 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Jun  3 23:15:01.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9563 exec execpod-bt6c8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9563.svc.cluster.local:80/; test "$?" -ne "0"'
Jun  3 23:15:02.023: INFO: rc: 1
Jun  3 23:15:02.023: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2022-06-03 23:15:02.01417594 +0000 UTC m=+16.870546253, err error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9563 exec execpod-bt6c8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9563.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2022-06-03 23:15:02.01417594 +0000 UTC m=+16.870546253
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9563.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Jun  3 23:15:04.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9563 exec execpod-bt6c8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9563.svc.cluster.local:80/; test "$?" -ne "0"'
Jun  3 23:15:05.311: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9563.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Jun  3 23:15:05.311: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Jun  3 23:15:05.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9563 exec execpod-bt6c8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9563.svc.cluster.local:80/'
Jun  3 23:15:05.867: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9563.svc.cluster.local:80/\n"
Jun  3 23:15:05.867: INFO: stdout: "NOW: 2022-06-03 23:15:05.807630694 +0000 UTC m=+20.664000958"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-9563
STEP: deleting service tolerate-unready in namespace services-9563
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:05.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9563" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:26.678 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":1,"skipped":237,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:26.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-8606
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:26.435: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:26.548: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:28.553: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:30.553: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:32.551: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:34.555: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:36.552: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:38.552: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:40.552: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:42.554: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:44.555: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:46.552: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:48.553: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:14:48.558: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:06.580: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:06.580: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:06.587: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:06.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8606" for this suite.


S [SKIPPING] [40.287 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:39.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-4584
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:39.486: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:39.519: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:41.522: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:43.523: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:45.524: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:47.524: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:49.523: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:51.522: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:53.522: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:55.524: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:57.523: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:59.524: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:01.522: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:03.522: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:03.526: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:15.568: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:15.568: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:15.575: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:15.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4584" for this suite.


S [SKIPPING] [36.214 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138

  Requires at least 2 nodes (not -1)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:16.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-4883
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-4883
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:16.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4883" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":736,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:39.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-4729
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:39.880: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:39.913: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:41.918: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:43.916: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:45.917: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:47.918: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:49.917: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:51.916: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:53.917: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:55.917: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:57.916: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:59.918: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:01.916: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:03.916: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:03.922: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:05.925: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:17.962: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:17.962: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:17.969: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:17.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4729" for this suite.


S [SKIPPING] [38.207 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:40.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-2815
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:40.329: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:40.797: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:42.805: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:44.805: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:46.801: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:48.801: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:50.802: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:52.801: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:54.802: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:56.801: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:58.801: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:00.803: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:02.800: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:04.801: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:06.801: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:08.802: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:10.802: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:10.807: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:18.833: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:18.833: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:18.841: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:18.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2815" for this suite.


S [SKIPPING] [38.637 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:41.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-6925
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:41.247: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:41.279: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:43.282: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:45.284: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:47.283: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:49.283: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:51.284: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:53.282: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:55.286: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:57.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:14:59.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:01.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:03.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:05.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:07.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:09.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:11.283: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:11.288: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:25.329: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:25.329: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:25.336: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:25.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6925" for this suite.


S [SKIPPING] [44.206 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:53.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-8994
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:14:53.866: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:14:53.899: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:55.903: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:57.903: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:14:59.904: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:01.903: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:03.905: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:05.906: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:07.904: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:09.903: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:11.903: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:13.902: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:15.904: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:17.906: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:17.910: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:19.917: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:21.914: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:25.935: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:25.935: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:25.943: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:25.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8994" for this suite.


S [SKIPPING] [32.195 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:02.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-1549
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:15:02.934: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:02.968: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:04.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:06.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:08.974: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:10.971: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:12.972: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:14.971: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:16.971: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:18.971: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:20.971: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:22.972: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:24.972: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:24.977: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:26.982: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:28.982: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:33.005: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:33.005: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:33.013: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:33.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1549" for this suite.


S [SKIPPING] [30.229 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:33.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun  3 23:15:33.417: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jun  3 23:15:33.420: INFO: starting watch
STEP: patching
STEP: updating
Jun  3 23:15:33.427: INFO: waiting for watch events with expected annotations
Jun  3 23:15:33.427: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Jun  3 23:15:33.428: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:33.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-7451" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":337,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:33.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Jun  3 23:15:33.557: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:33.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-3390" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
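
Both firewall specs in this run are provider-gated: firewall.go:61 skips them unless the suite was started against a cloud provider whose firewall rules it can query, and this run used the local provider. A purely illustrative invocation for a GCE-backed cluster might look like the following; the flag values are placeholders and the binary path depends on how the suite was built:

  ./e2e.test --ginkgo.focus='Firewall rule' \
    --provider=gce --gce-project=<project-id> --gce-zone=<zone> \
    --kubeconfig=/root/.kube/config
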
SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:25.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-8568
Jun  3 23:15:25.391: INFO: hairpin-test cluster ip: 10.233.20.123
STEP: creating a client/server pod
Jun  3 23:15:25.406: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:27.410: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:29.410: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:31.413: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:33.408: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:35.410: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:37.411: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-8568 to expose endpoints map[hairpin:[8080]]
Jun  3 23:15:37.420: INFO: successfully validated that service hairpin-test in namespace services-8568 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Jun  3 23:15:38.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8568 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Jun  3 23:15:38.876: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Jun  3 23:15:38.876: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun  3 23:15:38.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8568 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.20.123 8080'
Jun  3 23:15:39.306: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.20.123 8080\nConnection to 10.233.20.123 8080 port [tcp/http-alt] succeeded!\n"
Jun  3 23:15:39.306: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:39.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8568" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:13.957 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":1,"skipped":182,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:18.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Jun  3 23:15:18.928: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:20.932: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:22.933: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:24.932: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Jun  3 23:15:24.947: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:26.950: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:28.950: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:30.951: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:32.951: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:34.951: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:36.950: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Jun  3 23:15:38.972: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:40.976: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:42.977: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Jun  3 23:15:42.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-5967 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Jun  3 23:15:43.271: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 6 flow entries have been shown.\n"
Jun  3 23:15:43.271: INFO: stdout: "tcp      6 3598 CLOSE_WAIT src=10.244.3.184 dst=10.10.190.208 sport=57460 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=7744 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Jun  3 23:15:43.271: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3598 CLOSE_WAIT src=10.244.3.184 dst=10.10.190.208 sport=57460 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=7744 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:43.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-5967" for this suite.


• [SLOW TEST:24.407 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":2,"skipped":592,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:03.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-9725
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:15:04.054: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:04.084: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:06.088: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:08.088: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:10.091: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:12.089: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:14.089: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:16.093: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:18.088: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:20.091: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:22.089: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:24.090: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:26.090: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:26.094: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:28.097: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:30.099: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:32.099: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:44.136: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:44.136: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:44.143: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:44.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9725" for this suite.


S [SKIPPING] [40.213 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:18.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-426
STEP: creating replication controller externalip-test in namespace services-426
I0603 23:15:18.075329      29 runners.go:190] Created replication controller with name: externalip-test, namespace: services-426, replica count: 2
I0603 23:15:21.125788      29 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:24.127026      29 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:27.127472      29 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun  3 23:15:27.127: INFO: Creating new exec pod
Jun  3 23:15:42.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-426 exec execpodxjcfm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jun  3 23:15:42.401: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Jun  3 23:15:42.401: INFO: stdout: "externalip-test-2mtng"
Jun  3 23:15:42.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-426 exec execpodxjcfm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.45.61 80'
Jun  3 23:15:42.682: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.45.61 80\nConnection to 10.233.45.61 80 port [tcp/http] succeeded!\n"
Jun  3 23:15:42.682: INFO: stdout: ""
Jun  3 23:15:43.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-426 exec execpodxjcfm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.45.61 80'
Jun  3 23:15:44.011: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.45.61 80\nConnection to 10.233.45.61 80 port [tcp/http] succeeded!\n"
Jun  3 23:15:44.011: INFO: stdout: "externalip-test-2mtng"
Jun  3 23:15:44.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-426 exec execpodxjcfm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jun  3 23:15:44.294: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Jun  3 23:15:44.295: INFO: stdout: "externalip-test-2mtng"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:44.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-426" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:26.264 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":2,"skipped":480,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:06.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-3151
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:15:06.396: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:06.428: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:08.432: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:10.435: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:12.432: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:14.435: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:16.431: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:18.433: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:20.433: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:22.431: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:24.432: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:26.432: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:26.436: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:28.439: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:30.443: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:32.440: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:44.461: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:44.461: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:44.468: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:44.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3151" for this suite.


S [SKIPPING] [38.189 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:44.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Jun  3 23:15:44.864: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:44.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-9102" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:45.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-4260
STEP: deleting original service nodeport-reuse
Jun  3 23:15:45.379: INFO: Creating new host exec pod
Jun  3 23:15:45.394: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:47.399: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:49.398: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:51.401: INFO: The status of Pod hostexec is Running (Ready = true)
Jun  3 23:15:51.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4260 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :32269' | tail -n +2 | grep LISTEN'
Jun  3 23:15:51.891: INFO: stderr: "+ ss -ant46 'sport = :32269'\n+ tail -n +2\n+ grep LISTEN\n"
Jun  3 23:15:51.891: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 32269
STEP: deleting service nodeport-reuse in namespace services-4260
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:51.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4260" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:6.582 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":2,"skipped":865,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:52.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:52.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8282" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":3,"skipped":982,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun  3 23:15:52.619: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:16.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-8217
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:15:16.683: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:16.719: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:18.722: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:20.723: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:22.724: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:24.728: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:26.723: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:28.724: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:30.724: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:32.724: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:34.723: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:36.723: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:38.723: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:38.728: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:40.732: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:42.732: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:44.733: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:15:54.774: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:15:54.774: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:54.781: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:15:54.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8217" for this suite.


S [SKIPPING] [38.239 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Jun  3 23:15:54.796: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:26.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-2272
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun  3 23:15:26.159: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:15:26.198: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:28.201: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:30.204: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:32.201: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:34.204: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:36.204: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:38.201: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:40.202: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:42.201: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:44.201: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:46.202: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun  3 23:15:48.202: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun  3 23:15:48.207: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun  3 23:15:50.212: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun  3 23:16:00.250: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun  3 23:16:00.250: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun  3 23:16:00.256: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:16:00.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2272" for this suite.


S [SKIPPING] [34.224 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Jun  3 23:16:00.269: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:43.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Jun  3 23:15:43.492: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:45.495: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:47.496: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Jun  3 23:15:47.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5332 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jun  3 23:15:47.808: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jun  3 23:15:47.808: INFO: stdout: "iptables"
Jun  3 23:15:47.808: INFO: proxyMode: iptables
Jun  3 23:15:47.815: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jun  3 23:15:47.817: INFO: Pod kube-proxy-mode-detector no longer exists
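
Note: the proxy-mode probe above simply exec's a curl against kube-proxy's local status endpoint from a short-lived host-network pod. Roughly the same check can be reproduced by hand; in this sketch NS and POD are placeholders for any host-network pod on the node of interest, not names from this run:

    # kube-proxy's metrics endpoint binds to 127.0.0.1:10249 by default, so the
    # request has to originate on the node itself (here via a host-network pod).
    kubectl -n "$NS" exec "$POD" -- curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
    # Expected on this cluster, per the run above: iptables
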
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-5332
Jun  3 23:15:47.824: INFO: sourceip-test cluster ip: 10.233.2.252
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Jun  3 23:15:47.841: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:49.844: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:51.844: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:53.844: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:55.846: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:57.844: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-5332 to expose endpoints map[echo-sourceip:[8080]]
Jun  3 23:15:57.852: INFO: successfully validated that service sourceip-test in namespace services-5332 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Jun  3 23:15:57.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Jun  3 23:15:59.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-d9b4c4b84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  3 23:16:01.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-d9b4c4b84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  3 23:16:03.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894962, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-d9b4c4b84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  3 23:16:05.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894962, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63789894957, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-d9b4c4b84\" is progressing."}}, CollisionCount:(*int32)(nil)}
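
Note: the status dumps above are the framework polling the pause-pod Deployment until both replicas become available (UnavailableReplicas reaching 0). Outside the framework the same wait is usually a one-liner; in this sketch the Deployment name pause-pod is inferred from the ReplicaSet name pause-pod-d9b4c4b84 in the log and is not confirmed by the output itself:

    # Block until the Deployment reports the desired number of available replicas.
    kubectl -n services-5332 rollout status deployment/pause-pod --timeout=2m
    # Or watch the same counters the framework is logging:
    kubectl -n services-5332 get deployment pause-pod \
      -o jsonpath='{.status.readyReplicas}/{.status.replicas} replicas ready{"\n"}'
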
Jun  3 23:16:07.869: INFO: Waiting up to 2m0s to get response from 10.233.2.252:8080
Jun  3 23:16:07.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5332 exec pause-pod-d9b4c4b84-n79vj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.2.252:8080/clientip'
Jun  3 23:16:08.139: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.2.252:8080/clientip\n"
Jun  3 23:16:08.139: INFO: stdout: "10.244.3.193:58922"
STEP: Verifying the preserved source ip
Jun  3 23:16:08.139: INFO: Waiting up to 2m0s to get response from 10.233.2.252:8080
Jun  3 23:16:08.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5332 exec pause-pod-d9b4c4b84-q9p8v -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.2.252:8080/clientip'
Jun  3 23:16:08.389: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.2.252:8080/clientip\n"
Jun  3 23:16:08.389: INFO: stdout: "10.244.4.82:44650"
STEP: Verifying the preserved source ip
Jun  3 23:16:08.389: INFO: Deleting deployment
Jun  3 23:16:08.393: INFO: Cleaning up the echo server pod
Jun  3 23:16:08.399: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:16:08.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5332" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:24.959 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":3,"skipped":684,"failed":0}
Jun  3 23:16:08.418: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:06.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-3442
STEP: creating service service-headless in namespace services-3442
STEP: creating replication controller service-headless in namespace services-3442
I0603 23:15:06.890980      32 runners.go:190] Created replication controller with name: service-headless, namespace: services-3442, replica count: 3
I0603 23:15:09.942764      32 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:12.943802      32 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:15.944034      32 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:18.944797      32 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:21.945120      32 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-3442
STEP: creating service service-headless-toggled in namespace services-3442
STEP: creating replication controller service-headless-toggled in namespace services-3442
I0603 23:15:21.957242      32 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-3442, replica count: 3
I0603 23:15:25.008881      32 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:28.009887      32 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:31.012937      32 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:34.013272      32 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:37.013934      32 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Jun  3 23:15:37.016: INFO: Creating new host exec pod
Jun  3 23:15:37.032: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:39.035: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:41.037: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:43.036: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun  3 23:15:43.036: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun  3 23:15:49.056: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done" in pod services-3442/verify-service-up-host-exec-pod
Jun  3 23:15:49.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done'
Jun  3 23:15:49.433: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n"
Jun  3 23:15:49.433: INFO: stdout: "service-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\
nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\n"
Jun  3 23:15:49.434: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done" in pod services-3442/verify-service-up-exec-pod-t5kt7
Jun  3 23:15:49.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-up-exec-pod-t5kt7 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done'
Jun  3 23:15:49.910: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n"
Jun  3 23:15:49.910: INFO: stdout: "service-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\
nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3442
STEP: Deleting pod verify-service-up-exec-pod-t5kt7 in namespace services-3442
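
Note: the two long transcripts above are the framework's reachability sweep: 150 short wget requests against the service VIP 10.233.28.14:80, run once from a host-network pod and once from a regular exec pod, with the expectation that all three backend pod names show up in the responses. A stripped-down version of the same sweep, with NS and POD as placeholders:

    # Hit the service VIP repeatedly and tally which backends answered.
    kubectl -n "$NS" exec "$POD" -- /bin/sh -c \
      'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done' \
      | sort | uniq -c
    # All three service-headless-toggled-* backends should appear in the tally.
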
STEP: verifying service-headless is not up
Jun  3 23:15:49.925: INFO: Creating new host exec pod
Jun  3 23:15:49.937: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:51.940: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:53.940: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:55.942: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:57.940: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:59.942: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun  3 23:15:59.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.1.253:80 && echo service-down-failed'
Jun  3 23:16:02.285: INFO: rc: 28
Jun  3 23:16:02.285: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.1.253:80 && echo service-down-failed" in pod services-3442/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.1.253:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.1.253:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3442
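
Note: curl's exit code 28 above is its connect-timeout status, which is exactly what this negative check wants: the service carries the service.kubernetes.io/headless label, which kube-proxy is expected to honor by not programming rules for it, so its ClusterIP 10.233.1.253 stops answering and the trailing 'service-down-failed' marker is never printed. A minimal sketch of the same probe (NS and POD are placeholders for a host-network exec pod):

    # A timeout (curl exit 28) is the desired "service is dark" outcome here.
    kubectl -n "$NS" exec "$POD" -- /bin/sh -c \
      'curl -g -s --connect-timeout 2 http://10.233.1.253:80 && echo service-down-failed'
    echo "curl exit code: $?"
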
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Jun  3 23:16:02.301: INFO: Creating new host exec pod
Jun  3 23:16:02.315: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:04.319: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:06.319: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:08.319: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:10.319: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:12.319: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:14.320: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:16.321: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun  3 23:16:16.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.28.14:80 && echo service-down-failed'
Jun  3 23:16:18.603: INFO: rc: 28
Jun  3 23:16:18.603: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.28.14:80 && echo service-down-failed" in pod services-3442/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.28.14:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.28.14:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3442
STEP: removing service.kubernetes.io/headless label
STEP: verifying service is up
Jun  3 23:16:18.619: INFO: Creating new host exec pod
Jun  3 23:16:18.634: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:20.639: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun  3 23:16:20.639: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun  3 23:16:24.659: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done" in pod services-3442/verify-service-up-host-exec-pod
Jun  3 23:16:24.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done'
Jun  3 23:16:25.015: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n"
Jun  3 23:16:25.015: INFO: stdout: "service-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\
nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\n"
Jun  3 23:16:25.015: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done" in pod services-3442/verify-service-up-exec-pod-cqnv6
Jun  3 23:16:25.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-up-exec-pod-cqnv6 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done'
Jun  3 23:16:25.441: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.14:80\n+ echo\n"
Jun  3 23:16:25.441: INFO: stdout: "service-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\
nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-m8cmz\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-m8cmz\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-mjphm\nservice-headless-toggled-mjphm\nservice-headless-toggled-j56xf\nservice-headless-toggled-j56xf\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3442
STEP: Deleting pod verify-service-up-exec-pod-cqnv6 in namespace services-3442
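For reference, the reachability probe logged above reduces to a single kubectl exec (pod name, namespace and ClusterIP are the values from this run); each successful wget prints the name of the backend pod that answered, and "|| true" keeps the loop going across timeouts:

  # Service-up probe as executed above: 150 short wget attempts against the ClusterIP.
  kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 \
    exec verify-service-up-exec-pod-cqnv6 -- /bin/sh -x -c \
    'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.14:80 2>&1 || true; echo; done'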
STEP: verifying service-headless is still not up
Jun  3 23:16:25.454: INFO: Creating new host exec pod
Jun  3 23:16:25.469: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:27.473: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:29.475: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun  3 23:16:29.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.1.253:80 && echo service-down-failed'
Jun  3 23:16:31.839: INFO: rc: 28
Jun  3 23:16:31.839: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.1.253:80 && echo service-down-failed" in pod services-3442/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.1.253:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.1.253:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3442
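The "still not up" check above boils down to the inverse probe shown below (values from this run); curl exit code 28 is a connect timeout, which is exactly the outcome the test expects, so "service-down-failed" is never echoed:

  # Service-down probe as executed above: the old ClusterIP must not answer within 2s.
  kubectl --kubeconfig=/root/.kube/config --namespace=services-3442 \
    exec verify-service-down-host-exec-pod -- /bin/sh -x -c \
    'curl -g -s --connect-timeout 2 http://10.233.1.253:80 && echo service-down-failed'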
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:16:31.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3442" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:84.996 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":5,"skipped":855,"failed":0}
Jun  3 23:16:31.860: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:14:53.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-3894
STEP: creating service up-down-1 in namespace services-3894
STEP: creating replication controller up-down-1 in namespace services-3894
I0603 23:14:53.724094      27 runners.go:190] Created replication controller with name: up-down-1, namespace: services-3894, replica count: 3
I0603 23:14:56.774916      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:14:59.775669      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:02.775883      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-3894
STEP: creating service up-down-2 in namespace services-3894
STEP: creating replication controller up-down-2 in namespace services-3894
I0603 23:15:02.788070      27 runners.go:190] Created replication controller with name: up-down-2, namespace: services-3894, replica count: 3
I0603 23:15:05.839460      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:08.840642      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:11.841499      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:14.847477      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:17.848884      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
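Both replication controllers are polled by runners.go until all three replicas are running; a minimal stand-alone sketch of that wait (an illustrative example, not the framework's own code, assuming readyReplicas is populated in the RC status) would be:

  # Poll each replication controller until its status reports all 3 replicas ready.
  for rc in up-down-1 up-down-2; do
    until [ "$(kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 \
               get rc "$rc" -o jsonpath='{.status.readyReplicas}')" = "3" ]; do
      sleep 3
    done
  done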
STEP: verifying service up-down-1 is up
Jun  3 23:15:17.851: INFO: Creating new host exec pod
Jun  3 23:15:17.867: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:19.870: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:21.872: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun  3 23:15:21.872: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun  3 23:15:31.888: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.37.147:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-host-exec-pod
Jun  3 23:15:31.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.37.147:80 2>&1 || true; echo; done'
Jun  3 23:15:32.458: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n"
Jun  3 23:15:32.459: INFO: stdout: "up-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\n"
Jun  3 23:15:32.459: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.37.147:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-exec-pod-6f7hc
Jun  3 23:15:32.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-exec-pod-6f7hc -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.37.147:80 2>&1 || true; echo; done'
Jun  3 23:15:33.060: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.37.147:80\n+ echo\n"
Jun  3 23:15:33.060: INFO: stdout: "up-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-bph74\nup-down-1-xvw9m\nup-down-1-6snpm\nup-down-1-xvw9m\nup-down-1-bph74\nup-down-1-6snpm\nup-down-1-xvw9m\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3894
STEP: Deleting pod verify-service-up-exec-pod-6f7hc in namespace services-3894
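The "3 reachable backends" assertion above amounts to checking that every replica name shows up in the probe output at least once; a rough shell-only equivalent (a sketch under that assumption, not the framework's Go bookkeeping, and reusing the probe pod and ClusterIP from this run) would be:

  # Drop blank lines from failed attempts, then tally the distinct backend pod names.
  kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 \
    exec verify-service-up-host-exec-pod -- /bin/sh -c \
    'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.37.147:80 2>&1 || true; echo; done' \
    | grep -v '^$' | sort | uniq -c
  # Expected here: exactly three names (up-down-1-6snpm, up-down-1-bph74, up-down-1-xvw9m in this run).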
STEP: verifying service up-down-2 is up
Jun  3 23:15:33.075: INFO: Creating new host exec pod
Jun  3 23:15:33.090: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:35.095: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:37.093: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun  3 23:15:37.093: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun  3 23:15:45.111: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-host-exec-pod
Jun  3 23:15:45.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done'
Jun  3 23:15:45.716: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n"
Jun  3 23:15:45.717: INFO: stdout: "up-down-2-47dfc\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-crkcn\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\nup-down-2-6sb55\nup-down-2-6sb55\nup-down-2-crkcn\nup-down-2-47dfc\nup-down-2-6sb55\nup-down-2-6sb55\n"
Jun  3 23:15:45.717: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-exec-pod-qq4dc
Jun  3 23:15:45.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-exec-pod-qq4dc -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done'
Jun  3 23:15:46.133: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n [... '+ wget -q -T 1 -O - http://10.233.11.159:80' / '+ echo' trace repeated for all 150 iterations of the loop ...]\n"
Jun  3 23:15:46.133: INFO: stdout: "up-down-2-6sb55\nup-down-2-47dfc\nup-down-2-crkcn\n [... remaining responses omitted; every hostname returned in this run is one of up-down-2-6sb55, up-down-2-47dfc or up-down-2-crkcn ...]\n"
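The dump above is the test's "service is up" probe: 150 wget requests against the ClusterIP of service up-down-2 (10.233.11.159:80), with the hostnames echoed back expected to cover every serving pod. A minimal standalone sketch of the same probe, reusing the kubectl exec invocation shown in the log (the sort/uniq post-processing is added here for illustration; the framework itself does this comparison in Go):

SERVICE_IP=10.233.11.159   # ClusterIP of up-down-2 as reported in this run
kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 \
  exec verify-service-up-host-exec-pod -- /bin/sh -c \
  "for i in \$(seq 1 150); do wget -q -T 1 -O - http://$SERVICE_IP:80 2>&1 || true; echo; done" \
  | sort | uniq -c
# The check passes when every backend (here up-down-2-6sb55, up-down-2-47dfc, up-down-2-crkcn) appears at least once.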
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3894
STEP: Deleting pod verify-service-up-exec-pod-qq4dc in namespace services-3894
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-3894, will wait for the garbage collector to delete the pods
Jun  3 23:15:46.204: INFO: Deleting ReplicationController up-down-1 took: 3.495285ms
Jun  3 23:15:46.305: INFO: Terminating ReplicationController up-down-1 pods took: 100.896336ms
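Stopping up-down-1 amounts to deleting its ReplicationController and waiting for the garbage collector to remove the backing pods; the probe that follows is then expected to time out against the service's old ClusterIP (10.233.37.147). A rough CLI equivalent, as a hedged sketch rather than the framework's own call:

# Delete the RC; default cascading deletion lets the garbage collector clean up its pods.
kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 delete rc up-down-1
# Wait until no up-down-1 pods are left in the namespace.
while kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 get pods \
        --no-headers 2>/dev/null | grep -q '^up-down-1-'; do
  sleep 2
done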
STEP: verifying service up-down-1 is not up
Jun  3 23:15:56.514: INFO: Creating new host exec pod
Jun  3 23:15:56.526: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:58.531: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:00.531: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:02.531: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun  3 23:16:02.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.37.147:80 && echo service-down-failed'
Jun  3 23:16:04.784: INFO: rc: 28
Jun  3 23:16:04.784: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.37.147:80 && echo service-down-failed" in pod services-3894/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.37.147:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.37.147:80
command terminated with exit code 28

error:
exit status 28
Output: 
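The "service is not up" check is the mirror image of the wget loop: a single curl with a 2-second connect timeout must fail. Exit code 28 is curl's "operation timed out", so the trailing "&& echo service-down-failed" never runs and the non-zero rc is exactly the outcome the test wants. A standalone sketch of the same assertion (the exec command is copied from the log; the surrounding if/else is added for illustration):

if kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 \
     exec verify-service-down-host-exec-pod -- \
     /bin/sh -c 'curl -g -s --connect-timeout 2 http://10.233.37.147:80'; then
  echo "service-down-failed"   # unexpected: the old ClusterIP still served a response
else
  rc=$?
  echo "service is down as expected (curl exit code $rc; 28 = connect timeout)"
fi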
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3894
STEP: verifying service up-down-2 is still up
Jun  3 23:16:04.792: INFO: Creating new host exec pod
Jun  3 23:16:04.806: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:06.811: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun  3 23:16:06.811: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun  3 23:16:10.829: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-host-exec-pod
Jun  3 23:16:10.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done'
Jun  3 23:16:11.220: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n [... same '+ wget' / '+ echo' trace repeated for all 150 iterations ...]\n"
Jun  3 23:16:11.220: INFO: stdout: "up-down-2-47dfc\nup-down-2-6sb55\nup-down-2-6sb55\n [... remaining responses omitted; every hostname returned is one of up-down-2-6sb55, up-down-2-47dfc or up-down-2-crkcn ...]\n"
Jun  3 23:16:11.221: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-exec-pod-wftj5
Jun  3 23:16:11.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-exec-pod-wftj5 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done'
Jun  3 23:16:11.667: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n [... same '+ wget' / '+ echo' trace repeated for all 150 iterations ...]\n"
Jun  3 23:16:11.668: INFO: stdout: "up-down-2-47dfc\nup-down-2-47dfc\nup-down-2-crkcn\n [... remaining responses omitted; every hostname returned is one of up-down-2-6sb55, up-down-2-47dfc or up-down-2-crkcn ...]\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3894
STEP: Deleting pod verify-service-up-exec-pod-wftj5 in namespace services-3894
STEP: creating service up-down-3 in namespace services-3894
STEP: creating replication controller up-down-3 in namespace services-3894
I0603 23:16:11.689145      27 runners.go:190] Created replication controller with name: up-down-3, namespace: services-3894, replica count: 3
I0603 23:16:14.740890      27 runners.go:190] up-down-3 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:16:17.742741      27 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
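Creating up-down-3 follows the same shape as the earlier services: a Service whose selector matches a 3-replica ReplicationController of serve-hostname pods, which is why the later reachability check expects exactly three distinct hostnames. A hedged sketch of an equivalent manifest; the image, the container port (9376) and the "name" label key are assumptions based on the usual serve-hostname setup, not values read from this log:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: up-down-3
spec:
  selector:
    name: up-down-3          # assumed label key
  ports:
  - port: 80                 # the ClusterIP port probed by the test
    targetPort: 9376         # assumed serve-hostname port
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: up-down-3
spec:
  replicas: 3
  selector:
    name: up-down-3
  template:
    metadata:
      labels:
        name: up-down-3
    spec:
      containers:
      - name: up-down-3
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumed test image
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF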
STEP: verifying service up-down-2 is still up
Jun  3 23:16:17.745: INFO: Creating new host exec pod
Jun  3 23:16:17.759: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:19.764: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun  3 23:16:19.764: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun  3 23:16:23.782: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-host-exec-pod
Jun  3 23:16:23.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done'
Jun  3 23:16:24.154: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n [... same '+ wget' / '+ echo' trace repeated for all 150 iterations ...]\n"
Jun  3 23:16:24.155: INFO: stdout: "up-down-2-6sb55\nup-down-2-crkcn\nup-down-2-47dfc\n [... remaining responses omitted; every hostname returned is one of up-down-2-6sb55, up-down-2-47dfc or up-down-2-crkcn ...]\n"
Jun  3 23:16:24.155: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-exec-pod-7r62f
Jun  3 23:16:24.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-exec-pod-7r62f -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.11.159:80 2>&1 || true; echo; done'
Jun  3 23:16:24.547: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.11.159:80\n+ echo\n [... same '+ wget' / '+ echo' trace repeated for all 150 iterations ...]\n"
Jun  3 23:16:24.547: INFO: stdout: "up-down-2-47dfc\nup-down-2-47dfc\nup-down-2-47dfc\n [... remaining responses omitted; every hostname returned is one of up-down-2-6sb55, up-down-2-47dfc or up-down-2-crkcn ...]\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3894
STEP: Deleting pod verify-service-up-exec-pod-7r62f in namespace services-3894
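The probes so far all targeted 10.233.11.159 (up-down-2); the next block switches to 10.233.31.94, the ClusterIP assigned to up-down-3. A Service keeps its ClusterIP for its whole lifetime, and it can be read back directly if you want to reproduce one of these probes by hand (a hypothetical lookup, not part of the test's own output):

kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 \
  get service up-down-3 -o jsonpath='{.spec.clusterIP}'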
STEP: verifying service up-down-3 is up
Jun  3 23:16:24.560: INFO: Creating new host exec pod
Jun  3 23:16:24.576: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:26.580: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:28.580: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:30.582: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:32.582: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:34.581: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:36.579: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:38.579: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:40.581: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:42.580: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun  3 23:16:42.580: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun  3 23:16:46.601: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.94:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-host-exec-pod
Jun  3 23:16:46.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.94:80 2>&1 || true; echo; done'
Jun  3 23:16:46.972: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n"
Jun  3 23:16:46.972: INFO: stdout: "up-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-npvsx\n"
Jun  3 23:16:46.972: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.94:80 2>&1 || true; echo; done" in pod services-3894/verify-service-up-exec-pod-bv7gv
Jun  3 23:16:46.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3894 exec verify-service-up-exec-pod-bv7gv -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.94:80 2>&1 || true; echo; done'
Jun  3 23:16:47.346: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.31.94:80\n+ echo\n"
Jun  3 23:16:47.346: INFO: stdout: "up-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-npvsx\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-zzfwv\nup-down-3-z86dt\nup-down-3-zzfwv\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-npvsx\nup-down-3-z86dt\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3894
STEP: Deleting pod verify-service-up-exec-pod-bv7gv in namespace services-3894
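The "3 reachable backends" check above boils down to exec'ing a 150-iteration wget loop against the ClusterIP from inside the cluster and looking for every replica name in the output. A minimal way to repeat it by hand, using the namespace, VIP and pod name from the log (the verify-* pods are deleted right after the step, so in practice substitute any pod that can reach the VIP):

  # 150 short-timeout requests against the up-down-3 ClusterIP, then count
  # the distinct responders
  kubectl --kubeconfig=/root/.kube/config -n services-3894 \
    exec verify-service-up-host-exec-pod -- /bin/sh -c \
    'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.31.94:80 2>&1 || true; echo; done' \
    | sort | uniq -c
  # the step passes when up-down-3-z86dt, up-down-3-zzfwv and up-down-3-npvsx
  # all appear at least once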
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:16:47.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3894" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:113.682 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:44.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-2496
STEP: creating service service-proxy-disabled in namespace services-2496
STEP: creating replication controller service-proxy-disabled in namespace services-2496
I0603 23:15:44.348223      29 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-2496, replica count: 3
I0603 23:15:47.399035      29 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:50.401083      29 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:53.402238      29 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:56.404313      29 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-2496
STEP: creating service service-proxy-toggled in namespace services-2496
STEP: creating replication controller service-proxy-toggled in namespace services-2496
I0603 23:15:56.416828      29 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-2496, replica count: 3
I0603 23:15:59.467914      29 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:16:02.468086      29 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:16:05.469052      29 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
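The runner lines above show the usual setup for this test: each Service under test is backed by a replication controller with three replicas that serve their own hostname. The test builds the ReplicationController through its runner library; a Deployment-based approximation you could create by hand (image, binary path and container port are assumptions for illustration, not values read from the log) might look like:

  # three replicas that echo their pod name over HTTP, plus a ClusterIP service
  kubectl -n services-2496 create deployment service-proxy-toggled \
    --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --replicas=3 \
    -- /agnhost serve-hostname
  # serve-hostname listens on 9376 by default; expose it on service port 80
  kubectl -n services-2496 expose deployment service-proxy-toggled \
    --port=80 --target-port=9376

service-proxy-disabled is set up the same way; judging by the step names, the difference is that its Service carries the service.kubernetes.io/service-proxy-name label from the start, so the default kube-proxy should never program its ClusterIP.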
STEP: verifying service is up
Jun  3 23:16:05.471: INFO: Creating new host exec pod
Jun  3 23:16:05.487: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:07.490: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:09.493: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun  3 23:16:09.493: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun  3 23:16:15.517: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.56:80 2>&1 || true; echo; done" in pod services-2496/verify-service-up-host-exec-pod
Jun  3 23:16:15.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.56:80 2>&1 || true; echo; done'
Jun  3 23:16:15.889: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n"
Jun  3 23:16:15.889: INFO: stdout: "service-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-pr
oxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\n"
Jun  3 23:16:15.889: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.56:80 2>&1 || true; echo; done" in pod services-2496/verify-service-up-exec-pod-v5z56
Jun  3 23:16:15.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-up-exec-pod-v5z56 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.56:80 2>&1 || true; echo; done'
Jun  3 23:16:16.300: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n"
Jun  3 23:16:16.301: INFO: stdout: "service-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-pr
oxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2496
STEP: Deleting pod verify-service-up-exec-pod-v5z56 in namespace services-2496
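Note that the framework checks the VIP from two vantage points: a "host exec pod" running with hostNetwork (traffic enters from the node's network namespace) and a regular exec pod on the pod network. To reproduce the host-network side by hand, one option is a throwaway hostNetwork pod (pod name and image here are illustrative):

  # a disposable hostNetwork pod to probe the VIP from the node network
  kubectl -n services-2496 run host-probe --restart=Never --image=busybox:1.35 \
    --overrides='{"apiVersion": "v1", "spec": {"hostNetwork": true}}' \
    --command -- sleep 3600
  kubectl -n services-2496 wait --for=condition=Ready pod/host-probe
  kubectl -n services-2496 exec host-probe -- wget -q -T 1 -O - http://10.233.12.56:80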
STEP: verifying service-disabled is not up
Jun  3 23:16:16.314: INFO: Creating new host exec pod
Jun  3 23:16:16.331: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:18.336: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:20.336: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun  3 23:16:20.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.57.19:80 && echo service-down-failed'
Jun  3 23:16:22.593: INFO: rc: 28
Jun  3 23:16:22.593: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.57.19:80 && echo service-down-failed" in pod services-2496/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.57.19:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.57.19:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2496
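The "is not up" verification works the other way around: it runs curl with a 2-second connect timeout against the VIP and treats curl's exit code 28 (operation timed out) as success, since kube-proxy is expected to ignore the labelled service-proxy-disabled Service. The rc: 28 and "exit status 28" lines above are therefore the expected outcome, not a failure. A manual spot-check along the same lines (the verify pod is deleted right after the step, so substitute any host-network pod that has curl):

  kubectl -n services-2496 exec verify-service-down-host-exec-pod -- \
    /bin/sh -c 'curl -g -s --connect-timeout 2 http://10.233.57.19:80 && echo service-down-failed'
  echo "exit code: $?"   # 28 means the connection attempt timed out, which is what this step wants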
STEP: adding service-proxy-name label
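kube-proxy skips Services that carry the service.kubernetes.io/service-proxy-name label with a value that does not match its own configured proxy name, so labelling service-proxy-toggled should make its ClusterIP stop answering, which the next step confirms. Doing the same by hand (the label value here is illustrative):

  kubectl -n services-2496 label service service-proxy-toggled \
    service.kubernetes.io/service-proxy-name=foo-bar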
STEP: verifying service is not up
Jun  3 23:16:22.609: INFO: Creating new host exec pod
Jun  3 23:16:22.624: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:24.629: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:26.628: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:28.627: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:30.630: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:32.629: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:34.630: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:36.628: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun  3 23:16:36.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.12.56:80 && echo service-down-failed'
Jun  3 23:16:38.885: INFO: rc: 28
Jun  3 23:16:38.885: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.12.56:80 && echo service-down-failed" in pod services-2496/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.12.56:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.12.56:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2496
STEP: removing service-proxy-name label
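Removing the label hands the Service back to the default proxy, which is why the VIP becomes reachable again in the steps that follow. kubectl removes a label with the trailing-dash syntax:

  kubectl -n services-2496 label service service-proxy-toggled \
    service.kubernetes.io/service-proxy-name-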
STEP: verifying service is up
Jun  3 23:16:38.901: INFO: Creating new host exec pod
Jun  3 23:16:38.924: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:40.928: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:42.928: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun  3 23:16:42.928: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun  3 23:16:46.944: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.56:80 2>&1 || true; echo; done" in pod services-2496/verify-service-up-host-exec-pod
Jun  3 23:16:46.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.56:80 2>&1 || true; echo; done'
Jun  3 23:16:47.321: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n"
Jun  3 23:16:47.322: INFO: stdout: "service-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-pr
oxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\n"
Jun  3 23:16:47.322: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.56:80 2>&1 || true; echo; done" in pod services-2496/verify-service-up-exec-pod-5rv4m
Jun  3 23:16:47.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-up-exec-pod-5rv4m -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.12.56:80 2>&1 || true; echo; done'
Jun  3 23:16:47.701: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.12.56:80\n+ echo\n"
Jun  3 23:16:47.702: INFO: stdout: "service-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-pr
oxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-ppqmq\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-kfkh4\nservice-proxy-toggled-mjwhs\nservice-proxy-toggled-mjwhs\n"
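The two exec pods above (verify-service-up-host-exec-pod on the host network and the regular verify-service-up-exec-pod-5rv4m) each hit the ClusterIP 10.233.12.56:80 150 times, and the stdout shows every backend of the toggled service (service-proxy-toggled-mjwhs, -kfkh4 and -ppqmq) answering with its pod name, which is what the check treats as "service up". A minimal sketch of the same verification, assuming it is run from any pod that can reach the ClusterIP and that the backends echo their pod name as in the output above:

    # tally how many of the 150 requests each endpoint answered;
    # every expected pod name should appear at least once
    for i in $(seq 1 150); do
      wget -q -T 1 -O - http://10.233.12.56:80 2>&1 || true
      echo
    done | sort | uniq -c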
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2496
STEP: Deleting pod verify-service-up-exec-pod-5rv4m in namespace services-2496
STEP: verifying service-disabled is still not up
Jun  3 23:16:47.715: INFO: Creating new host exec pod
Jun  3 23:16:47.735: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:49.740: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun  3 23:16:49.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.57.19:80 && echo service-down-failed'
Jun  3 23:16:51.988: INFO: rc: 28
Jun  3 23:16:51.988: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.57.19:80 && echo service-down-failed" in pod services-2496/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2496 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.57.19:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.57.19:80
command terminated with exit code 28

error:
exit status 28
Output: 
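curl exits with code 28 when it times out, so the connect attempt to 10.233.57.19:80 (the service carrying the service.kubernetes.io/service-proxy-name label, which kube-proxy is expected to ignore) never got an answer; the trailing '&& echo service-down-failed' would only print if the request unexpectedly succeeded, so rc 28 is the passing outcome here. A minimal sketch of the same negative check, assuming the same ClusterIP and timeout:

    # the labeled service must stay unreachable; any non-zero curl exit code
    # (28 = connect timeout, as in the run above) means nothing answered
    curl -g -s --connect-timeout 2 http://10.233.57.19:80
    rc=$?
    if [ "$rc" -eq 0 ]; then
      echo "service-down-failed"                    # got a reply: the proxy wrongly programmed the service
    else
      echo "unreachable as expected (curl rc=$rc)"
    fi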
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2496
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:16:51.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2496" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:67.692 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":3,"skipped":482,"failed":0}
Jun  3 23:16:52.007: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:39.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Jun  3 23:15:39.697: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:41.701: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:43.702: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:45.701: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:47.702: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Jun  3 23:15:47.722: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:49.726: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:51.726: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:53.726: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:55.728: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
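Both pods are brought up with the same simple poll: check the pod status every couple of seconds until it reports Running with Ready = true (boom-server and startup-script each took roughly 8 seconds here). Outside the e2e framework the same wait can be expressed declaratively; a minimal sketch, assuming kubectl access to the test namespace (shown with a <namespace> placeholder, since the generated conntrack namespace name is not printed in this excerpt):

    # block until the pod is Ready or the timeout expires
    kubectl --kubeconfig=/root/.kube/config -n <namespace> \
      wait --for=condition=Ready pod/boom-server --timeout=2m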
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
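What follows is the boom-server's own log: the startup-script client (10.244.3.190) repeatedly connects to the server (10.244.4.75:9000), and on each connection the server injects a crafted "boom" segment back at the client whose sequence numbering conntrack is expected to classify as INVALID. The test passes only if the node drops that segment instead of delivering it, because a delivered out-of-window segment would make the client reset the connection. A minimal sketch of the kind of rule that produces this behaviour, assuming plain iptables on the node (kube-proxy typically installs an equivalent rule in its own forward chain; the exact chain is not shown in this log):

    # drop anything conntrack considers INVALID so out-of-window segments
    # never reach the client and never trigger a RST
    iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP

    # inspect what is actually installed on the node
    iptables -S | grep -- '--ctstate INVALID'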
Jun  3 23:16:55.968: INFO: boom-server pod logs: 2022/06/03 23:15:44 external ip: 10.244.4.75
2022/06/03 23:15:44 listen on 0.0.0.0:9000
2022/06/03 23:15:44 probing 10.244.4.75
2022/06/03 23:15:55 tcp packet: &{SrcPort:44765 DestPort:9000 Seq:2612728773 Ack:0 Flags:40962 WindowSize:29200 Checksum:38445 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:15:55 tcp packet: &{SrcPort:44765 DestPort:9000 Seq:2612728774 Ack:1369358923 Flags:32784 WindowSize:229 Checksum:26237 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:15:55 connection established
2022/06/03 23:15:55 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 174 221 81 157 59 171 155 187 19 198 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:15:55 checksumer: &{sum:580968 oddByte:33 length:39}
2022/06/03 23:15:55 ret:  581001
2022/06/03 23:15:55 ret:  56721
2022/06/03 23:15:55 ret:  56721
2022/06/03 23:15:55 boom packet injected
2022/06/03 23:15:55 tcp packet: &{SrcPort:44765 DestPort:9000 Seq:2612728774 Ack:1369358923 Flags:32785 WindowSize:229 Checksum:26236 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
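In these packet lines the Flags field packs the TCP data offset together with the flag bits: 40962 is 0xA002 (offset 10, SYN), 32784 is 0x8010 (offset 8, ACK) and 32785 is 0x8011 (FIN+ACK), matching the flag annotations, and the 7-byte payload 98 111 111 109 33 33 33 is ASCII for "boom!!!". The checksumer lines are the usual ones'-complement sum being folded back into 16 bits, and the logged numbers line up with that: 580968 plus the odd trailing byte 33 is 581001, and folding the carry gives 56721, the value repeated in the two ret lines (whether the result is complemented afterwards is not visible here). A minimal sketch of the fold, using the first connection's numbers:

    # ones'-complement carry fold, reproducing the values logged above
    sum=$((580968 + 33))                     # running sum plus the odd trailing byte -> 581001
    while [ $((sum >> 16)) -ne 0 ]; do       # fold 16-bit carries back in until none remain
      sum=$(( (sum >> 16) + (sum & 0xFFFF) ))
    done
    echo "$sum"                              # prints 56721, matching the ret lines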
2022/06/03 23:15:57 tcp packet: &{SrcPort:45107 DestPort:9000 Seq:174599780 Ack:0 Flags:40962 WindowSize:29200 Checksum:956 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:15:57 tcp packet: &{SrcPort:45107 DestPort:9000 Seq:174599781 Ack:2340007334 Flags:32784 WindowSize:229 Checksum:42757 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:15:57 connection established
2022/06/03 23:15:57 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 176 51 139 120 39 6 10 104 46 101 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:15:57 checksumer: &{sum:439578 oddByte:33 length:39}
2022/06/03 23:15:57 ret:  439611
2022/06/03 23:15:57 ret:  46401
2022/06/03 23:15:57 ret:  46401
2022/06/03 23:15:57 boom packet injected
2022/06/03 23:15:57 tcp packet: &{SrcPort:45107 DestPort:9000 Seq:174599781 Ack:2340007334 Flags:32785 WindowSize:229 Checksum:42756 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:15:59 tcp packet: &{SrcPort:35695 DestPort:9000 Seq:2866705820 Ack:0 Flags:40962 WindowSize:29200 Checksum:14592 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:15:59 tcp packet: &{SrcPort:35695 DestPort:9000 Seq:2866705821 Ack:3622390552 Flags:32784 WindowSize:229 Checksum:60056 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:15:59 connection established
2022/06/03 23:15:59 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 139 111 215 231 196 120 170 222 117 157 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:15:59 checksumer: &{sum:557509 oddByte:33 length:39}
2022/06/03 23:15:59 ret:  557542
2022/06/03 23:15:59 ret:  33262
2022/06/03 23:15:59 ret:  33262
2022/06/03 23:15:59 boom packet injected
2022/06/03 23:15:59 tcp packet: &{SrcPort:35695 DestPort:9000 Seq:2866705821 Ack:3622390552 Flags:32785 WindowSize:229 Checksum:60055 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:01 tcp packet: &{SrcPort:43638 DestPort:9000 Seq:2827493835 Ack:0 Flags:40962 WindowSize:29200 Checksum:26703 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:01 tcp packet: &{SrcPort:43638 DestPort:9000 Seq:2827493836 Ack:4180897700 Flags:32784 WindowSize:229 Checksum:52288 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:01 connection established
2022/06/03 23:16:01 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 170 118 249 49 233 4 168 136 33 204 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:01 checksumer: &{sum:473045 oddByte:33 length:39}
2022/06/03 23:16:01 ret:  473078
2022/06/03 23:16:01 ret:  14333
2022/06/03 23:16:01 ret:  14333
2022/06/03 23:16:01 boom packet injected
2022/06/03 23:16:01 tcp packet: &{SrcPort:43638 DestPort:9000 Seq:2827493836 Ack:4180897700 Flags:32785 WindowSize:229 Checksum:52287 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:03 tcp packet: &{SrcPort:36845 DestPort:9000 Seq:520784499 Ack:0 Flags:40962 WindowSize:29200 Checksum:39902 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:03 tcp packet: &{SrcPort:36845 DestPort:9000 Seq:520784500 Ack:2182499169 Flags:32784 WindowSize:229 Checksum:37727 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:03 connection established
2022/06/03 23:16:03 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 143 237 130 20 196 193 31 10 138 116 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:03 checksumer: &{sum:489470 oddByte:33 length:39}
2022/06/03 23:16:03 ret:  489503
2022/06/03 23:16:03 ret:  30758
2022/06/03 23:16:03 ret:  30758
2022/06/03 23:16:03 boom packet injected
2022/06/03 23:16:03 tcp packet: &{SrcPort:36845 DestPort:9000 Seq:520784500 Ack:2182499169 Flags:32785 WindowSize:229 Checksum:37725 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:05 tcp packet: &{SrcPort:44765 DestPort:9000 Seq:2612728775 Ack:1369358924 Flags:32784 WindowSize:229 Checksum:6235 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:05 tcp packet: &{SrcPort:39994 DestPort:9000 Seq:2868908951 Ack:0 Flags:40962 WindowSize:29200 Checksum:29349 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:05 tcp packet: &{SrcPort:39994 DestPort:9000 Seq:2868908952 Ack:1892838413 Flags:32784 WindowSize:229 Checksum:21229 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:05 connection established
2022/06/03 23:16:05 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 156 58 112 208 229 109 171 0 19 152 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:05 checksumer: &{sum:476975 oddByte:33 length:39}
2022/06/03 23:16:05 ret:  477008
2022/06/03 23:16:05 ret:  18263
2022/06/03 23:16:05 ret:  18263
2022/06/03 23:16:05 boom packet injected
2022/06/03 23:16:05 tcp packet: &{SrcPort:39994 DestPort:9000 Seq:2868908952 Ack:1892838413 Flags:32785 WindowSize:229 Checksum:21228 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:07 tcp packet: &{SrcPort:45107 DestPort:9000 Seq:174599782 Ack:2340007335 Flags:32784 WindowSize:229 Checksum:22755 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:07 tcp packet: &{SrcPort:32981 DestPort:9000 Seq:3074505631 Ack:0 Flags:40962 WindowSize:29200 Checksum:20976 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:07 tcp packet: &{SrcPort:32981 DestPort:9000 Seq:3074505632 Ack:3810414766 Flags:32784 WindowSize:229 Checksum:54137 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:07 connection established
2022/06/03 23:16:07 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 128 213 227 28 202 14 183 65 59 160 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:07 checksumer: &{sum:465055 oddByte:33 length:39}
2022/06/03 23:16:07 ret:  465088
2022/06/03 23:16:07 ret:  6343
2022/06/03 23:16:07 ret:  6343
2022/06/03 23:16:07 boom packet injected
2022/06/03 23:16:07 tcp packet: &{SrcPort:32981 DestPort:9000 Seq:3074505632 Ack:3810414766 Flags:32785 WindowSize:229 Checksum:54136 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:09 tcp packet: &{SrcPort:35695 DestPort:9000 Seq:2866705822 Ack:3622390553 Flags:32784 WindowSize:229 Checksum:40052 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:09 tcp packet: &{SrcPort:35372 DestPort:9000 Seq:2658278854 Ack:0 Flags:40962 WindowSize:29200 Checksum:30577 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:09 tcp packet: &{SrcPort:35372 DestPort:9000 Seq:2658278855 Ack:1036052366 Flags:32784 WindowSize:229 Checksum:65448 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:09 connection established
2022/06/03 23:16:09 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 138 44 61 191 96 238 158 114 29 199 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:09 checksumer: &{sum:543074 oddByte:33 length:39}
2022/06/03 23:16:09 ret:  543107
2022/06/03 23:16:09 ret:  18827
2022/06/03 23:16:09 ret:  18827
2022/06/03 23:16:09 boom packet injected
2022/06/03 23:16:09 tcp packet: &{SrcPort:35372 DestPort:9000 Seq:2658278855 Ack:1036052366 Flags:32785 WindowSize:229 Checksum:65447 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:11 tcp packet: &{SrcPort:43638 DestPort:9000 Seq:2827493837 Ack:4180897701 Flags:32784 WindowSize:229 Checksum:32284 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:11 tcp packet: &{SrcPort:42071 DestPort:9000 Seq:1055311320 Ack:0 Flags:40962 WindowSize:29200 Checksum:3311 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:11 tcp packet: &{SrcPort:42071 DestPort:9000 Seq:1055311321 Ack:797592643 Flags:32784 WindowSize:229 Checksum:14039 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:11 connection established
2022/06/03 23:16:11 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 164 87 47 136 197 163 62 230 197 217 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:11 checksumer: &{sum:555291 oddByte:33 length:39}
2022/06/03 23:16:11 ret:  555324
2022/06/03 23:16:11 ret:  31044
2022/06/03 23:16:11 ret:  31044
2022/06/03 23:16:11 boom packet injected
2022/06/03 23:16:11 tcp packet: &{SrcPort:42071 DestPort:9000 Seq:1055311321 Ack:797592643 Flags:32785 WindowSize:229 Checksum:14038 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:13 tcp packet: &{SrcPort:36845 DestPort:9000 Seq:520784501 Ack:2182499170 Flags:32784 WindowSize:229 Checksum:17724 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:13 tcp packet: &{SrcPort:34872 DestPort:9000 Seq:3719163947 Ack:0 Flags:40962 WindowSize:29200 Checksum:22562 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:13 tcp packet: &{SrcPort:34872 DestPort:9000 Seq:3719163948 Ack:1084758289 Flags:32784 WindowSize:229 Checksum:40013 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:13 connection established
2022/06/03 23:16:13 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 136 56 64 166 146 113 221 173 240 44 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:13 checksumer: &{sum:483495 oddByte:33 length:39}
2022/06/03 23:16:13 ret:  483528
2022/06/03 23:16:13 ret:  24783
2022/06/03 23:16:13 ret:  24783
2022/06/03 23:16:13 boom packet injected
2022/06/03 23:16:13 tcp packet: &{SrcPort:34872 DestPort:9000 Seq:3719163948 Ack:1084758289 Flags:32785 WindowSize:229 Checksum:40012 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:15 tcp packet: &{SrcPort:39994 DestPort:9000 Seq:2868908953 Ack:1892838414 Flags:32784 WindowSize:229 Checksum:1225 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:15 tcp packet: &{SrcPort:38216 DestPort:9000 Seq:1750046845 Ack:0 Flags:40962 WindowSize:29200 Checksum:4175 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:15 tcp packet: &{SrcPort:38216 DestPort:9000 Seq:1750046846 Ack:3135198214 Flags:32784 WindowSize:229 Checksum:38781 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:15 connection established
2022/06/03 23:16:15 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 149 72 186 221 205 102 104 79 152 126 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:15 checksumer: &{sum:495772 oddByte:33 length:39}
2022/06/03 23:16:15 ret:  495805
2022/06/03 23:16:15 ret:  37060
2022/06/03 23:16:15 ret:  37060
2022/06/03 23:16:15 boom packet injected
2022/06/03 23:16:15 tcp packet: &{SrcPort:38216 DestPort:9000 Seq:1750046846 Ack:3135198214 Flags:32785 WindowSize:229 Checksum:38780 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:17 tcp packet: &{SrcPort:32981 DestPort:9000 Seq:3074505633 Ack:3810414767 Flags:32784 WindowSize:229 Checksum:34135 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:17 tcp packet: &{SrcPort:36190 DestPort:9000 Seq:3355666549 Ack:0 Flags:40962 WindowSize:29200 Checksum:57531 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:17 tcp packet: &{SrcPort:36190 DestPort:9000 Seq:3355666550 Ack:770332520 Flags:32784 WindowSize:229 Checksum:59820 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:17 connection established
2022/06/03 23:16:17 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 141 94 45 232 208 200 200 3 104 118 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:17 checksumer: &{sum:507706 oddByte:33 length:39}
2022/06/03 23:16:17 ret:  507739
2022/06/03 23:16:17 ret:  48994
2022/06/03 23:16:17 ret:  48994
2022/06/03 23:16:17 boom packet injected
2022/06/03 23:16:17 tcp packet: &{SrcPort:36190 DestPort:9000 Seq:3355666550 Ack:770332520 Flags:32785 WindowSize:229 Checksum:59819 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:19 tcp packet: &{SrcPort:35372 DestPort:9000 Seq:2658278856 Ack:1036052367 Flags:32784 WindowSize:229 Checksum:45444 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:19 tcp packet: &{SrcPort:42144 DestPort:9000 Seq:2005009325 Ack:0 Flags:40962 WindowSize:29200 Checksum:30449 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:19 tcp packet: &{SrcPort:42144 DestPort:9000 Seq:2005009326 Ack:2854253913 Flags:32784 WindowSize:229 Checksum:56808 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:19 connection established
2022/06/03 23:16:19 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 164 160 170 30 238 185 119 130 3 174 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:19 checksumer: &{sum:515894 oddByte:33 length:39}
2022/06/03 23:16:19 ret:  515927
2022/06/03 23:16:19 ret:  57182
2022/06/03 23:16:19 ret:  57182
2022/06/03 23:16:19 boom packet injected
2022/06/03 23:16:19 tcp packet: &{SrcPort:42144 DestPort:9000 Seq:2005009326 Ack:2854253913 Flags:32785 WindowSize:229 Checksum:56807 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:21 tcp packet: &{SrcPort:42071 DestPort:9000 Seq:1055311322 Ack:797592644 Flags:32784 WindowSize:229 Checksum:59570 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:21 tcp packet: &{SrcPort:44729 DestPort:9000 Seq:96157702 Ack:0 Flags:40962 WindowSize:29200 Checksum:39543 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:21 tcp packet: &{SrcPort:44729 DestPort:9000 Seq:96157703 Ack:162429643 Flags:32784 WindowSize:229 Checksum:38046 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:21 connection established
2022/06/03 23:16:21 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 174 185 9 172 244 43 5 187 64 7 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:21 checksumer: &{sum:493936 oddByte:33 length:39}
2022/06/03 23:16:21 ret:  493969
2022/06/03 23:16:21 ret:  35224
2022/06/03 23:16:21 ret:  35224
2022/06/03 23:16:21 boom packet injected
2022/06/03 23:16:21 tcp packet: &{SrcPort:44729 DestPort:9000 Seq:96157703 Ack:162429643 Flags:32785 WindowSize:229 Checksum:38045 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:23 tcp packet: &{SrcPort:34872 DestPort:9000 Seq:3719163949 Ack:1084758290 Flags:32784 WindowSize:229 Checksum:20009 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:23 tcp packet: &{SrcPort:35696 DestPort:9000 Seq:382898708 Ack:0 Flags:40962 WindowSize:29200 Checksum:21193 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:23 tcp packet: &{SrcPort:35696 DestPort:9000 Seq:382898709 Ack:2574260924 Flags:32784 WindowSize:229 Checksum:5485 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:23 connection established
2022/06/03 23:16:23 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 139 112 153 110 148 28 22 210 146 21 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:23 checksumer: &{sum:465120 oddByte:33 length:39}
2022/06/03 23:16:23 ret:  465153
2022/06/03 23:16:23 ret:  6408
2022/06/03 23:16:23 ret:  6408
2022/06/03 23:16:23 boom packet injected
2022/06/03 23:16:23 tcp packet: &{SrcPort:35696 DestPort:9000 Seq:382898709 Ack:2574260924 Flags:32785 WindowSize:229 Checksum:5484 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:25 tcp packet: &{SrcPort:38216 DestPort:9000 Seq:1750046847 Ack:3135198215 Flags:32784 WindowSize:229 Checksum:18777 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:25 tcp packet: &{SrcPort:32982 DestPort:9000 Seq:1858664574 Ack:0 Flags:40962 WindowSize:29200 Checksum:38705 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:25 tcp packet: &{SrcPort:32982 DestPort:9000 Seq:1858664575 Ack:2316225561 Flags:32784 WindowSize:229 Checksum:45064 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:25 connection established
2022/06/03 23:16:25 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 128 214 138 13 69 121 110 200 248 127 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:25 checksumer: &{sum:514869 oddByte:33 length:39}
2022/06/03 23:16:25 ret:  514902
2022/06/03 23:16:25 ret:  56157
2022/06/03 23:16:25 ret:  56157
2022/06/03 23:16:25 boom packet injected
2022/06/03 23:16:25 tcp packet: &{SrcPort:32982 DestPort:9000 Seq:1858664575 Ack:2316225561 Flags:32785 WindowSize:229 Checksum:45063 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:27 tcp packet: &{SrcPort:36190 DestPort:9000 Seq:3355666551 Ack:770332521 Flags:32784 WindowSize:229 Checksum:39816 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:27 tcp packet: &{SrcPort:44464 DestPort:9000 Seq:367975815 Ack:0 Flags:40962 WindowSize:29200 Checksum:54872 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:27 tcp packet: &{SrcPort:44464 DestPort:9000 Seq:367975816 Ack:3659404293 Flags:32784 WindowSize:229 Checksum:20324 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:27 connection established
2022/06/03 23:16:27 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 173 176 218 28 141 101 21 238 221 136 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:27 checksumer: &{sum:515974 oddByte:33 length:39}
2022/06/03 23:16:27 ret:  516007
2022/06/03 23:16:27 ret:  57262
2022/06/03 23:16:27 ret:  57262
2022/06/03 23:16:27 boom packet injected
2022/06/03 23:16:27 tcp packet: &{SrcPort:44464 DestPort:9000 Seq:367975816 Ack:3659404293 Flags:32785 WindowSize:229 Checksum:20322 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:29 tcp packet: &{SrcPort:42144 DestPort:9000 Seq:2005009327 Ack:2854253914 Flags:32784 WindowSize:229 Checksum:36804 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:29 tcp packet: &{SrcPort:35220 DestPort:9000 Seq:1273310534 Ack:0 Flags:40962 WindowSize:29200 Checksum:27886 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:29 tcp packet: &{SrcPort:35220 DestPort:9000 Seq:1273310535 Ack:1458113228 Flags:32784 WindowSize:229 Checksum:27285 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:29 connection established
2022/06/03 23:16:29 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 137 148 86 231 132 44 75 229 45 71 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:29 checksumer: &{sum:526939 oddByte:33 length:39}
2022/06/03 23:16:29 ret:  526972
2022/06/03 23:16:29 ret:  2692
2022/06/03 23:16:29 ret:  2692
2022/06/03 23:16:29 boom packet injected
2022/06/03 23:16:29 tcp packet: &{SrcPort:35220 DestPort:9000 Seq:1273310535 Ack:1458113228 Flags:32785 WindowSize:229 Checksum:27284 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:31 tcp packet: &{SrcPort:44729 DestPort:9000 Seq:96157704 Ack:162429644 Flags:32784 WindowSize:229 Checksum:18044 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:31 tcp packet: &{SrcPort:34166 DestPort:9000 Seq:3164075321 Ack:0 Flags:40962 WindowSize:29200 Checksum:12438 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:31 tcp packet: &{SrcPort:34166 DestPort:9000 Seq:3164075322 Ack:3971883834 Flags:32784 WindowSize:229 Checksum:30763 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:31 connection established
2022/06/03 23:16:31 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 133 118 236 188 156 154 188 151 245 58 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:31 checksumer: &{sum:513598 oddByte:33 length:39}
2022/06/03 23:16:31 ret:  513631
2022/06/03 23:16:31 ret:  54886
2022/06/03 23:16:31 ret:  54886
2022/06/03 23:16:31 boom packet injected
2022/06/03 23:16:31 tcp packet: &{SrcPort:34166 DestPort:9000 Seq:3164075322 Ack:3971883834 Flags:32785 WindowSize:229 Checksum:30762 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:33 tcp packet: &{SrcPort:35696 DestPort:9000 Seq:382898710 Ack:2574260925 Flags:32784 WindowSize:229 Checksum:51016 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:33 tcp packet: &{SrcPort:46006 DestPort:9000 Seq:2982982161 Ack:0 Flags:40962 WindowSize:29200 Checksum:18551 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:33 tcp packet: &{SrcPort:46006 DestPort:9000 Seq:2982982162 Ack:2827649589 Flags:32784 WindowSize:229 Checksum:28019 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:33 connection established
2022/06/03 23:16:33 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 179 182 168 136 251 149 177 204 178 18 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:33 checksumer: &{sum:518713 oddByte:33 length:39}
2022/06/03 23:16:33 ret:  518746
2022/06/03 23:16:33 ret:  60001
2022/06/03 23:16:33 ret:  60001
2022/06/03 23:16:33 boom packet injected
2022/06/03 23:16:33 tcp packet: &{SrcPort:46006 DestPort:9000 Seq:2982982162 Ack:2827649589 Flags:32785 WindowSize:229 Checksum:28018 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:35 tcp packet: &{SrcPort:32982 DestPort:9000 Seq:1858664576 Ack:2316225562 Flags:32784 WindowSize:229 Checksum:25060 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:35 tcp packet: &{SrcPort:37753 DestPort:9000 Seq:3988093933 Ack:0 Flags:40962 WindowSize:29200 Checksum:23328 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:35 tcp packet: &{SrcPort:37753 DestPort:9000 Seq:3988093934 Ack:2229402435 Flags:32784 WindowSize:229 Checksum:8934 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:35 connection established
2022/06/03 23:16:35 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 147 121 132 224 116 163 237 181 123 238 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:35 checksumer: &{sum:579443 oddByte:33 length:39}
2022/06/03 23:16:35 ret:  579476
2022/06/03 23:16:35 ret:  55196
2022/06/03 23:16:35 ret:  55196
2022/06/03 23:16:35 boom packet injected
2022/06/03 23:16:35 tcp packet: &{SrcPort:37753 DestPort:9000 Seq:3988093934 Ack:2229402435 Flags:32785 WindowSize:229 Checksum:8933 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:37 tcp packet: &{SrcPort:44464 DestPort:9000 Seq:367975817 Ack:3659404294 Flags:32784 WindowSize:229 Checksum:320 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:37 tcp packet: &{SrcPort:38209 DestPort:9000 Seq:1783907058 Ack:0 Flags:40962 WindowSize:29200 Checksum:3555 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:37 tcp packet: &{SrcPort:38209 DestPort:9000 Seq:1783907059 Ack:1601507867 Flags:32784 WindowSize:229 Checksum:56429 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:37 connection established
2022/06/03 23:16:37 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 149 65 95 115 139 123 106 84 66 243 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:37 checksumer: &{sum:503211 oddByte:33 length:39}
2022/06/03 23:16:37 ret:  503244
2022/06/03 23:16:37 ret:  44499
2022/06/03 23:16:37 ret:  44499
2022/06/03 23:16:37 boom packet injected
2022/06/03 23:16:37 tcp packet: &{SrcPort:38209 DestPort:9000 Seq:1783907059 Ack:1601507867 Flags:32785 WindowSize:229 Checksum:56428 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:39 tcp packet: &{SrcPort:35220 DestPort:9000 Seq:1273310536 Ack:1458113229 Flags:32784 WindowSize:229 Checksum:7283 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:39 tcp packet: &{SrcPort:43720 DestPort:9000 Seq:28222466 Ack:0 Flags:40962 WindowSize:29200 Checksum:63520 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:39 tcp packet: &{SrcPort:43720 DestPort:9000 Seq:28222467 Ack:235528252 Flags:32784 WindowSize:229 Checksum:16933 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:39 connection established
2022/06/03 23:16:39 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 170 200 14 8 89 156 1 174 164 3 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:39 checksumer: &{sum:480310 oddByte:33 length:39}
2022/06/03 23:16:39 ret:  480343
2022/06/03 23:16:39 ret:  21598
2022/06/03 23:16:39 ret:  21598
2022/06/03 23:16:39 boom packet injected
2022/06/03 23:16:39 tcp packet: &{SrcPort:43720 DestPort:9000 Seq:28222467 Ack:235528252 Flags:32785 WindowSize:229 Checksum:16932 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:41 tcp packet: &{SrcPort:34166 DestPort:9000 Seq:3164075323 Ack:3971883835 Flags:32784 WindowSize:229 Checksum:10759 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:41 tcp packet: &{SrcPort:44301 DestPort:9000 Seq:3225623688 Ack:0 Flags:40962 WindowSize:29200 Checksum:46831 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:41 tcp packet: &{SrcPort:44301 DestPort:9000 Seq:3225623689 Ack:1093750308 Flags:32784 WindowSize:229 Checksum:22547 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:41 connection established
2022/06/03 23:16:41 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 173 13 65 47 199 132 192 67 28 137 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:41 checksumer: &{sum:443409 oddByte:33 length:39}
2022/06/03 23:16:41 ret:  443442
2022/06/03 23:16:41 ret:  50232
2022/06/03 23:16:41 ret:  50232
2022/06/03 23:16:41 boom packet injected
2022/06/03 23:16:41 tcp packet: &{SrcPort:44301 DestPort:9000 Seq:3225623689 Ack:1093750308 Flags:32785 WindowSize:229 Checksum:22546 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:43 tcp packet: &{SrcPort:46006 DestPort:9000 Seq:2982982163 Ack:2827649590 Flags:32784 WindowSize:229 Checksum:8017 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:43 tcp packet: &{SrcPort:36320 DestPort:9000 Seq:2942275017 Ack:0 Flags:40962 WindowSize:29200 Checksum:28143 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:43 tcp packet: &{SrcPort:36320 DestPort:9000 Seq:2942275018 Ack:2156719627 Flags:32784 WindowSize:229 Checksum:10240 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:43 connection established
2022/06/03 23:16:43 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 141 224 128 139 103 107 175 95 141 202 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:43 checksumer: &{sum:538416 oddByte:33 length:39}
2022/06/03 23:16:43 ret:  538449
2022/06/03 23:16:43 ret:  14169
2022/06/03 23:16:43 ret:  14169
2022/06/03 23:16:43 boom packet injected
2022/06/03 23:16:43 tcp packet: &{SrcPort:36320 DestPort:9000 Seq:2942275018 Ack:2156719627 Flags:32785 WindowSize:229 Checksum:10239 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:45 tcp packet: &{SrcPort:37753 DestPort:9000 Seq:3988093935 Ack:2229402436 Flags:32784 WindowSize:229 Checksum:54467 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:45 tcp packet: &{SrcPort:46017 DestPort:9000 Seq:3098849497 Ack:0 Flags:40962 WindowSize:29200 Checksum:5080 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:45 tcp packet: &{SrcPort:46017 DestPort:9000 Seq:3098849498 Ack:1979693229 Flags:32784 WindowSize:229 Checksum:1538 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:45 connection established
2022/06/03 23:16:45 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 179 193 117 254 50 13 184 180 176 218 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:45 checksumer: &{sum:561730 oddByte:33 length:39}
2022/06/03 23:16:45 ret:  561763
2022/06/03 23:16:45 ret:  37483
2022/06/03 23:16:45 ret:  37483
2022/06/03 23:16:45 boom packet injected
2022/06/03 23:16:45 tcp packet: &{SrcPort:46017 DestPort:9000 Seq:3098849498 Ack:1979693229 Flags:32785 WindowSize:229 Checksum:1537 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:47 tcp packet: &{SrcPort:38209 DestPort:9000 Seq:1783907060 Ack:1601507868 Flags:32784 WindowSize:229 Checksum:36427 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:47 tcp packet: &{SrcPort:46319 DestPort:9000 Seq:1293821600 Ack:0 Flags:40962 WindowSize:29200 Checksum:169 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:47 tcp packet: &{SrcPort:46319 DestPort:9000 Seq:1293821601 Ack:26362145 Flags:32784 WindowSize:229 Checksum:55036 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:47 connection established
2022/06/03 23:16:47 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 180 239 1 144 186 129 77 30 38 161 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:47 checksumer: &{sum:521826 oddByte:33 length:39}
2022/06/03 23:16:47 ret:  521859
2022/06/03 23:16:47 ret:  63114
2022/06/03 23:16:47 ret:  63114
2022/06/03 23:16:47 boom packet injected
2022/06/03 23:16:47 tcp packet: &{SrcPort:46319 DestPort:9000 Seq:1293821601 Ack:26362145 Flags:32785 WindowSize:229 Checksum:55035 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:49 tcp packet: &{SrcPort:43720 DestPort:9000 Seq:28222468 Ack:235528253 Flags:32784 WindowSize:229 Checksum:62466 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:49 tcp packet: &{SrcPort:37936 DestPort:9000 Seq:545666693 Ack:0 Flags:40962 WindowSize:29200 Checksum:13898 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:49 tcp packet: &{SrcPort:37936 DestPort:9000 Seq:545666694 Ack:3522575856 Flags:32784 WindowSize:229 Checksum:14233 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:49 connection established
2022/06/03 23:16:49 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 148 48 209 244 183 80 32 134 54 134 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:49 checksumer: &{sum:505842 oddByte:33 length:39}
2022/06/03 23:16:49 ret:  505875
2022/06/03 23:16:49 ret:  47130
2022/06/03 23:16:49 ret:  47130
2022/06/03 23:16:49 boom packet injected
2022/06/03 23:16:49 tcp packet: &{SrcPort:37936 DestPort:9000 Seq:545666694 Ack:3522575856 Flags:32785 WindowSize:229 Checksum:14232 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:51 tcp packet: &{SrcPort:44301 DestPort:9000 Seq:3225623690 Ack:1093750309 Flags:32784 WindowSize:229 Checksum:2543 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:51 tcp packet: &{SrcPort:39357 DestPort:9000 Seq:1180765628 Ack:0 Flags:40962 WindowSize:29200 Checksum:11226 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:51 tcp packet: &{SrcPort:39357 DestPort:9000 Seq:1180765629 Ack:530799264 Flags:32784 WindowSize:229 Checksum:47867 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:51 connection established
2022/06/03 23:16:51 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 153 189 31 161 212 0 70 97 13 189 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:51 checksumer: &{sum:504671 oddByte:33 length:39}
2022/06/03 23:16:51 ret:  504704
2022/06/03 23:16:51 ret:  45959
2022/06/03 23:16:51 ret:  45959
2022/06/03 23:16:51 boom packet injected
2022/06/03 23:16:51 tcp packet: &{SrcPort:39357 DestPort:9000 Seq:1180765629 Ack:530799264 Flags:32785 WindowSize:229 Checksum:47866 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:53 tcp packet: &{SrcPort:36320 DestPort:9000 Seq:2942275019 Ack:2156719628 Flags:32784 WindowSize:229 Checksum:55771 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:53 tcp packet: &{SrcPort:43137 DestPort:9000 Seq:2435000555 Ack:0 Flags:40962 WindowSize:29200 Checksum:44882 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:53 tcp packet: &{SrcPort:43137 DestPort:9000 Seq:2435000556 Ack:3481088615 Flags:32784 WindowSize:229 Checksum:44800 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:53 connection established
2022/06/03 23:16:53 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 168 129 207 123 171 199 145 35 40 236 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:53 checksumer: &{sum:526939 oddByte:33 length:39}
2022/06/03 23:16:53 ret:  526972
2022/06/03 23:16:53 ret:  2692
2022/06/03 23:16:53 ret:  2692
2022/06/03 23:16:53 boom packet injected
2022/06/03 23:16:53 tcp packet: &{SrcPort:43137 DestPort:9000 Seq:2435000556 Ack:3481088615 Flags:32785 WindowSize:229 Checksum:44799 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:55 tcp packet: &{SrcPort:46017 DestPort:9000 Seq:3098849499 Ack:1979693230 Flags:32784 WindowSize:229 Checksum:47070 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:55 tcp packet: &{SrcPort:44747 DestPort:9000 Seq:3970935758 Ack:0 Flags:40962 WindowSize:29200 Checksum:49864 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.190
2022/06/03 23:16:55 tcp packet: &{SrcPort:44747 DestPort:9000 Seq:3970935759 Ack:1289837659 Flags:32784 WindowSize:229 Checksum:4941 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.190
2022/06/03 23:16:55 connection established
2022/06/03 23:16:55 calling checksumTCP: 10.244.4.75 10.244.3.190 [35 40 174 203 76 223 213 187 236 175 171 207 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/03 23:16:55 checksumer: &{sum:596966 oddByte:33 length:39}
2022/06/03 23:16:55 ret:  596999
2022/06/03 23:16:55 ret:  7184
2022/06/03 23:16:55 ret:  7184
2022/06/03 23:16:55 boom packet injected
2022/06/03 23:16:55 tcp packet: &{SrcPort:44747 DestPort:9000 Seq:3970935759 Ack:1289837659 Flags:32785 WindowSize:229 Checksum:4940 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.190
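The block above is the boom-server's own log for the "should drop INVALID conntrack entries" spec: each cycle completes a handshake ("connection established") and then injects a deliberately bogus segment whose payload [98 111 111 109 33 33 33] spells "boom!!!". The pass condition, logged just below, is that no RST ever comes back, which is what you expect if conntrack classifies the injected segments as INVALID and they are dropped before reaching the peer's TCP stack. Two details of the dump are easy to misread: the Flags value appears to be the raw 16-bit data-offset-plus-flags word of the TCP header (32784 = 0x8010 is ACK with a 32-byte header, 40962 = 0xA002 is SYN with a 40-byte header, 32785 = 0x8011 is FIN ACK), and the checksumer/ret lines show a one's-complement (RFC 1071) fold of the running sum once the odd trailing payload byte has been added. A minimal Go sketch of that arithmetic, using the values from the first checksumer line above; this is illustrative only, not the test binary's own helper code:

    package main

    import "fmt"

    // decodeOffsetFlags splits the combined TCP "data offset + flags" word that the
    // log prints as a decimal Flags value, e.g. 32784 (0x8010) -> 32-byte header, ACK.
    func decodeOffsetFlags(word uint16) (headerLen int, flags uint16) {
        headerLen = int(word>>12) * 4 // data offset is counted in 32-bit words
        flags = word & 0x01ff         // low 9 bits carry the control flags
        return headerLen, flags
    }

    // fold reduces a running checksum to 16 bits by repeatedly adding the carry
    // back in, i.e. the one's-complement fold of RFC 1071.
    func fold(sum uint32) uint32 {
        for sum > 0xffff {
            sum = (sum >> 16) + (sum & 0xffff)
        }
        return sum
    }

    func main() {
        hl, fl := decodeOffsetFlags(32784)
        fmt.Printf("header=%dB flags=%#x\n", hl, fl) // header=32B flags=0x10 (ACK)

        // Values copied from the first checksumer line above: sum 538416, odd byte 33.
        fmt.Println(fold(538416 + 33)) // prints 14169, matching "ret:  14169"
    }

The fold is printed twice because a second pass is a no-op once the value already fits in 16 bits, which is why each pair of "ret:" lines repeats the same number.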

Jun  3 23:16:55.968: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun  3 23:16:55.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-7152" for this suite.


• [SLOW TEST:76.323 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":2,"skipped":359,"failed":0}
Jun  3 23:16:55.979: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:44.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-8013
STEP: creating a client pod for probing the service svc-udp
Jun  3 23:15:44.745: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:46.749: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:48.748: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:50.750: INFO: The status of Pod pod-client is Running (Ready = true)
Jun  3 23:15:50.760: INFO: Pod client logs: Fri Jun  3 23:15:49 UTC 2022
Fri Jun  3 23:15:49 UTC 2022 Try: 1

Fri Jun  3 23:15:49 UTC 2022 Try: 2

Fri Jun  3 23:15:49 UTC 2022 Try: 3

Fri Jun  3 23:15:49 UTC 2022 Try: 4

Fri Jun  3 23:15:49 UTC 2022 Try: 5

Fri Jun  3 23:15:49 UTC 2022 Try: 6

Fri Jun  3 23:15:49 UTC 2022 Try: 7
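The "Try: N" lines here, and in the much longer dump further below, are the client pod's own output: on each attempt it stamps the date and attempt number and fires a small UDP datagram at the service, and the conntrack specs pin the client's source port so that the same UDP conntrack entry is reused across attempts. As a rough illustration only, a loop of that shape could look like the Go sketch below; the real client is an agnhost-based shell loop, so the source port, NodePort, payload and interval are assumptions, and only the node IP 10.10.190.208 is taken from this log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Fixed source port so every probe reuses the same UDP conntrack entry
        // (the concrete port number here is an assumption, not taken from the log).
        laddr := &net.UDPAddr{Port: 12345}
        // Node IP 10.10.190.208 appears later in the log; the NodePort value is assumed.
        raddr := &net.UDPAddr{IP: net.ParseIP("10.10.190.208"), Port: 30080}

        conn, err := net.DialUDP("udp", laddr, raddr)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        for try := 1; ; try++ {
            // Matches the "<date> Try: N" lines in the pod-client log above.
            fmt.Printf("%s Try: %d\n", time.Now().UTC().Format(time.UnixDate), try)
            conn.Write([]byte("hostname\n")) // fire-and-forget; replies are checked elsewhere
            time.Sleep(time.Second)
        }
    }

In this run the counter climbs to Try: 91 in the later dump and the connectivity check against backend 1 never succeeds, which is the "Failed to connect to backend 1" failure recorded at 23:17:01 below.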

STEP: creating a backend pod pod-server-1 for the service svc-udp
Jun  3 23:15:50.771: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:52.775: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:54.775: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:56.775: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:15:58.774: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun  3 23:16:00.776: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8013 to expose endpoints map[pod-server-1:[80]]
Jun  3 23:16:00.788: INFO: successfully validated that service svc-udp in namespace conntrack-8013 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Jun  3 23:17:01.395: INFO: Pod client logs: Fri Jun  3 23:15:49 UTC 2022
Fri Jun  3 23:15:49 UTC 2022 Try: 1

Fri Jun  3 23:15:49 UTC 2022 Try: 2

Fri Jun  3 23:15:49 UTC 2022 Try: 3

Fri Jun  3 23:15:49 UTC 2022 Try: 4

Fri Jun  3 23:15:49 UTC 2022 Try: 5

Fri Jun  3 23:15:49 UTC 2022 Try: 6

Fri Jun  3 23:15:49 UTC 2022 Try: 7

Fri Jun  3 23:15:54 UTC 2022 Try: 8

Fri Jun  3 23:15:54 UTC 2022 Try: 9

Fri Jun  3 23:15:54 UTC 2022 Try: 10

Fri Jun  3 23:15:54 UTC 2022 Try: 11

Fri Jun  3 23:15:54 UTC 2022 Try: 12

Fri Jun  3 23:15:54 UTC 2022 Try: 13

Fri Jun  3 23:15:59 UTC 2022 Try: 14

Fri Jun  3 23:15:59 UTC 2022 Try: 15

Fri Jun  3 23:15:59 UTC 2022 Try: 16

Fri Jun  3 23:15:59 UTC 2022 Try: 17

Fri Jun  3 23:15:59 UTC 2022 Try: 18

Fri Jun  3 23:15:59 UTC 2022 Try: 19

Fri Jun  3 23:16:04 UTC 2022 Try: 20

Fri Jun  3 23:16:04 UTC 2022 Try: 21

Fri Jun  3 23:16:04 UTC 2022 Try: 22

Fri Jun  3 23:16:04 UTC 2022 Try: 23

Fri Jun  3 23:16:04 UTC 2022 Try: 24

Fri Jun  3 23:16:04 UTC 2022 Try: 25

Fri Jun  3 23:16:09 UTC 2022 Try: 26

Fri Jun  3 23:16:09 UTC 2022 Try: 27

Fri Jun  3 23:16:09 UTC 2022 Try: 28

Fri Jun  3 23:16:09 UTC 2022 Try: 29

Fri Jun  3 23:16:09 UTC 2022 Try: 30

Fri Jun  3 23:16:09 UTC 2022 Try: 31

Fri Jun  3 23:16:14 UTC 2022 Try: 32

Fri Jun  3 23:16:14 UTC 2022 Try: 33

Fri Jun  3 23:16:14 UTC 2022 Try: 34

Fri Jun  3 23:16:14 UTC 2022 Try: 35

Fri Jun  3 23:16:14 UTC 2022 Try: 36

Fri Jun  3 23:16:14 UTC 2022 Try: 37

Fri Jun  3 23:16:19 UTC 2022 Try: 38

Fri Jun  3 23:16:19 UTC 2022 Try: 39

Fri Jun  3 23:16:19 UTC 2022 Try: 40

Fri Jun  3 23:16:19 UTC 2022 Try: 41

Fri Jun  3 23:16:19 UTC 2022 Try: 42

Fri Jun  3 23:16:19 UTC 2022 Try: 43

Fri Jun  3 23:16:24 UTC 2022 Try: 44

Fri Jun  3 23:16:24 UTC 2022 Try: 45

Fri Jun  3 23:16:24 UTC 2022 Try: 46

Fri Jun  3 23:16:24 UTC 2022 Try: 47

Fri Jun  3 23:16:24 UTC 2022 Try: 48

Fri Jun  3 23:16:24 UTC 2022 Try: 49

Fri Jun  3 23:16:29 UTC 2022 Try: 50

Fri Jun  3 23:16:29 UTC 2022 Try: 51

Fri Jun  3 23:16:29 UTC 2022 Try: 52

Fri Jun  3 23:16:29 UTC 2022 Try: 53

Fri Jun  3 23:16:29 UTC 2022 Try: 54

Fri Jun  3 23:16:29 UTC 2022 Try: 55

Fri Jun  3 23:16:34 UTC 2022 Try: 56

Fri Jun  3 23:16:34 UTC 2022 Try: 57

Fri Jun  3 23:16:34 UTC 2022 Try: 58

Fri Jun  3 23:16:34 UTC 2022 Try: 59

Fri Jun  3 23:16:34 UTC 2022 Try: 60

Fri Jun  3 23:16:34 UTC 2022 Try: 61

Fri Jun  3 23:16:39 UTC 2022 Try: 62

Fri Jun  3 23:16:39 UTC 2022 Try: 63

Fri Jun  3 23:16:39 UTC 2022 Try: 64

Fri Jun  3 23:16:39 UTC 2022 Try: 65

Fri Jun  3 23:16:39 UTC 2022 Try: 66

Fri Jun  3 23:16:39 UTC 2022 Try: 67

Fri Jun  3 23:16:44 UTC 2022 Try: 68

Fri Jun  3 23:16:44 UTC 2022 Try: 69

Fri Jun  3 23:16:44 UTC 2022 Try: 70

Fri Jun  3 23:16:44 UTC 2022 Try: 71

Fri Jun  3 23:16:44 UTC 2022 Try: 72

Fri Jun  3 23:16:44 UTC 2022 Try: 73

Fri Jun  3 23:16:49 UTC 2022 Try: 74

Fri Jun  3 23:16:49 UTC 2022 Try: 75

Fri Jun  3 23:16:49 UTC 2022 Try: 76

Fri Jun  3 23:16:49 UTC 2022 Try: 77

Fri Jun  3 23:16:49 UTC 2022 Try: 78

Fri Jun  3 23:16:49 UTC 2022 Try: 79

Fri Jun  3 23:16:54 UTC 2022 Try: 80

Fri Jun  3 23:16:54 UTC 2022 Try: 81

Fri Jun  3 23:16:54 UTC 2022 Try: 82

Fri Jun  3 23:16:54 UTC 2022 Try: 83

Fri Jun  3 23:16:54 UTC 2022 Try: 84

Fri Jun  3 23:16:54 UTC 2022 Try: 85

Fri Jun  3 23:16:59 UTC 2022 Try: 86

Fri Jun  3 23:16:59 UTC 2022 Try: 87

Fri Jun  3 23:16:59 UTC 2022 Try: 88

Fri Jun  3 23:16:59 UTC 2022 Try: 89

Fri Jun  3 23:16:59 UTC 2022 Try: 90

Fri Jun  3 23:16:59 UTC 2022 Try: 91

Jun  3 23:17:01.396: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000703680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000703680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000703680, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-8013".
STEP: Found 8 events.
Jun  3 23:17:01.400: INFO: At 2022-06-03 23:15:48 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun  3 23:17:01.400: INFO: At 2022-06-03 23:15:48 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 354.582349ms
Jun  3 23:17:01.400: INFO: At 2022-06-03 23:15:49 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Jun  3 23:17:01.400: INFO: At 2022-06-03 23:15:49 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Jun  3 23:17:01.400: INFO: At 2022-06-03 23:15:54 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun  3 23:17:01.400: INFO: At 2022-06-03 23:15:54 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 337.411746ms
Jun  3 23:17:01.400: INFO: At 2022-06-03 23:15:55 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Jun  3 23:17:01.400: INFO: At 2022-06-03 23:15:55 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Jun  3 23:17:01.402: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Jun  3 23:17:01.402: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:44 +0000 UTC  }]
Jun  3 23:17:01.402: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:50 +0000 UTC  }]
Jun  3 23:17:01.402: INFO: 
Jun  3 23:17:01.407: INFO: 
Logging node info for node master1
Jun  3 23:17:01.409: INFO: Node Info: &Node{ObjectMeta:{master1    4d289319-b343-4e96-a789-1a1cbeac007b 75380 0 2022-06-03 19:57:53 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-03 19:57:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-03 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-03 20:05:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:30 +0000 UTC,LastTransitionTime:2022-06-03 20:03:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:55 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:55 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:55 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:16:55 +0000 UTC,LastTransitionTime:2022-06-03 20:00:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3d668405f73a457bb0bcb4df5f4edac8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:c08279e3-a5cb-4f4d-b9f0-f2cde655469f,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun  3 23:17:01.410: INFO: 
Logging kubelet events for node master1
Jun  3 23:17:01.413: INFO: 
Logging pods the kubelet thinks are on node master1
Jun  3 23:17:01.439: INFO: kube-proxy-zgchh started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Container kube-proxy ready: true, restart count 2
Jun  3 23:17:01.439: INFO: dns-autoscaler-7df78bfcfb-vdtpl started at 2022-06-03 20:01:09 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Container autoscaler ready: true, restart count 2
Jun  3 23:17:01.439: INFO: coredns-8474476ff8-rvc4v started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Container coredns ready: true, restart count 1
Jun  3 23:17:01.439: INFO: container-registry-65d7c44b96-2nzvn started at 2022-06-03 20:05:02 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Container docker-registry ready: true, restart count 0
Jun  3 23:17:01.439: INFO: 	Container nginx ready: true, restart count 0
Jun  3 23:17:01.439: INFO: kube-scheduler-master1 started at 2022-06-03 20:06:52 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Container kube-scheduler ready: true, restart count 0
Jun  3 23:17:01.439: INFO: node-exporter-45rhg started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:17:01.439: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:17:01.439: INFO: kube-apiserver-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun  3 23:17:01.439: INFO: kube-controller-manager-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jun  3 23:17:01.439: INFO: kube-flannel-m8sj7 started at 2022-06-03 20:00:31 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Init container install-cni ready: true, restart count 0
Jun  3 23:17:01.439: INFO: 	Container kube-flannel ready: true, restart count 1
Jun  3 23:17:01.439: INFO: kube-multus-ds-amd64-n58qk started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.439: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:17:01.565: INFO: 
Latency metrics for node master1
Jun  3 23:17:01.565: INFO: 
Logging node info for node master2
Jun  3 23:17:01.567: INFO: Node Info: &Node{ObjectMeta:{master2    a6ae2f0e-af0f-4dbb-a8e5-6d3a309310bc 75376 0 2022-06-03 19:58:21 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-03 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-03 20:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:28 +0000 UTC,LastTransitionTime:2022-06-03 20:03:28 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running 
on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:53 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:53 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:53 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:16:53 +0000 UTC,LastTransitionTime:2022-06-03 20:00:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:21e5c20b6e4a4d3fb07443d5575db572,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:52401484-5222-49a3-a465-e7215ade9b1e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun  3 23:17:01.568: INFO: 
Logging kubelet events for node master2
Jun  3 23:17:01.570: INFO: 
Logging pods the kubelet thinks are on node master2
Jun  3 23:17:01.584: INFO: kube-apiserver-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.584: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun  3 23:17:01.584: INFO: kube-controller-manager-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.584: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun  3 23:17:01.584: INFO: kube-proxy-nlc58 started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.584: INFO: 	Container kube-proxy ready: true, restart count 1
Jun  3 23:17:01.584: INFO: prometheus-operator-585ccfb458-xp2lz started at 2022-06-03 20:13:21 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:17:01.584: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:17:01.584: INFO: 	Container prometheus-operator ready: true, restart count 0
Jun  3 23:17:01.584: INFO: node-exporter-2h6sb started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:17:01.584: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:17:01.584: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:17:01.584: INFO: kube-scheduler-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.584: INFO: 	Container kube-scheduler ready: true, restart count 3
Jun  3 23:17:01.584: INFO: kube-flannel-sbdcv started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:17:01.584: INFO: 	Init container install-cni ready: true, restart count 2
Jun  3 23:17:01.584: INFO: 	Container kube-flannel ready: true, restart count 1
Jun  3 23:17:01.584: INFO: kube-multus-ds-amd64-ccvdq started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.584: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:17:01.671: INFO: 
Latency metrics for node master2
Jun  3 23:17:01.671: INFO: 
Logging node info for node master3
Jun  3 23:17:01.673: INFO: Node Info: &Node{ObjectMeta:{master3    559b19e7-45b0-4589-9993-9bba259aae96 75377 0 2022-06-03 19:58:27 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-03 19:58:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-03 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-03 20:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:22 +0000 UTC,LastTransitionTime:2022-06-03 20:03:22 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:54 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:54 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:54 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:16:54 +0000 UTC,LastTransitionTime:2022-06-03 20:03:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b399eed918a40dd8324debc1c0777a3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fde35f0-2dc9-4531-9d2b-0bd4a6516b3a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun  3 23:17:01.674: INFO: 
Logging kubelet events for node master3
Jun  3 23:17:01.676: INFO: 
Logging pods the kubelet thinks are on node master3
Jun  3 23:17:01.690: INFO: kube-apiserver-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.690: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun  3 23:17:01.690: INFO: kube-flannel-nx64t started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:17:01.690: INFO: 	Init container install-cni ready: true, restart count 2
Jun  3 23:17:01.690: INFO: 	Container kube-flannel ready: true, restart count 2
Jun  3 23:17:01.690: INFO: kube-multus-ds-amd64-gjv49 started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.690: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:17:01.690: INFO: node-feature-discovery-controller-cff799f9f-8fbbp started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.690: INFO: 	Container nfd-controller ready: true, restart count 0
Jun  3 23:17:01.690: INFO: node-exporter-jn8vv started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:17:01.690: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:17:01.690: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:17:01.690: INFO: kube-controller-manager-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.690: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun  3 23:17:01.690: INFO: kube-scheduler-master3 started at 2022-06-03 19:58:27 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.690: INFO: 	Container kube-scheduler ready: true, restart count 3
Jun  3 23:17:01.690: INFO: kube-proxy-m8r9n started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.690: INFO: 	Container kube-proxy ready: true, restart count 2
Jun  3 23:17:01.690: INFO: coredns-8474476ff8-dvwn7 started at 2022-06-03 20:01:07 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.690: INFO: 	Container coredns ready: true, restart count 1
Jun  3 23:17:01.792: INFO: 
Latency metrics for node master3
Jun  3 23:17:01.792: INFO: 
Logging node info for node node1
Jun  3 23:17:01.795: INFO: Node Info: &Node{ObjectMeta:{node1    482ecf0f-7f88-436d-a313-227096fe8b8d 75382 0 2022-06-03 19:59:31 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-03 19:59:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 22:19:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:39 +0000 UTC,LastTransitionTime:2022-06-03 20:03:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:55 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:55 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:55 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:16:55 +0000 UTC,LastTransitionTime:2022-06-03 20:00:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7b1fa7572024d5cac9eec5f4f2a75d3,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:a1aa46cd-ec2c-417b-ae44-b808bdc04113,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 
k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun  3 23:17:01.796: INFO: 
Logging kubelet events for node node1
Jun  3 23:17:01.798: INFO: 
Logging pods the kubelet thinks are on node node1
Jun  3 23:17:01.834: INFO: pod-client started at 2022-06-03 23:15:44 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container pod-client ready: true, restart count 0
Jun  3 23:17:01.834: INFO: service-proxy-toggled-kfkh4 started at 2022-06-03 23:15:57 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Jun  3 23:17:01.834: INFO: kube-flannel-hm6bh started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Init container install-cni ready: true, restart count 2
Jun  3 23:17:01.834: INFO: 	Container kube-flannel ready: true, restart count 3
Jun  3 23:17:01.834: INFO: service-proxy-disabled-c5mcc started at 2022-06-03 23:15:44 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Jun  3 23:17:01.834: INFO: up-down-2-crkcn started at 2022-06-03 23:15:02 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container up-down-2 ready: false, restart count 0
Jun  3 23:17:01.834: INFO: nginx-proxy-node1 started at 2022-06-03 19:59:31 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun  3 23:17:01.834: INFO: cmk-init-discover-node1-n75dv started at 2022-06-03 20:11:42 +0000 UTC (0+3 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container discover ready: false, restart count 0
Jun  3 23:17:01.834: INFO: 	Container init ready: false, restart count 0
Jun  3 23:17:01.834: INFO: 	Container install ready: false, restart count 0
Jun  3 23:17:01.834: INFO: node-feature-discovery-worker-rg6tx started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container nfd-worker ready: true, restart count 0
Jun  3 23:17:01.834: INFO: node-exporter-f5xkq started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:17:01.834: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:17:01.834: INFO: up-down-3-z86dt started at 2022-06-03 23:16:11 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container up-down-3 ready: false, restart count 0
Jun  3 23:17:01.834: INFO: collectd-nbx5z started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container collectd ready: true, restart count 0
Jun  3 23:17:01.834: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun  3 23:17:01.834: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun  3 23:17:01.834: INFO: kube-multus-ds-amd64-p7r6j started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:17:01.834: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun  3 23:17:01.834: INFO: cmk-webhook-6c9d5f8578-c927x started at 2022-06-03 20:12:25 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container cmk-webhook ready: true, restart count 0
Jun  3 23:17:01.834: INFO: kube-proxy-b6zlv started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container kube-proxy ready: true, restart count 2
Jun  3 23:17:01.834: INFO: startup-script started at 2022-06-03 23:15:47 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container startup-script ready: true, restart count 0
Jun  3 23:17:01.834: INFO: service-proxy-toggled-mjwhs started at 2022-06-03 23:15:57 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Jun  3 23:17:01.834: INFO: service-proxy-disabled-sprhn started at 2022-06-03 23:15:44 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Jun  3 23:17:01.834: INFO: prometheus-k8s-0 started at 2022-06-03 20:13:45 +0000 UTC (0+4 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container config-reloader ready: true, restart count 0
Jun  3 23:17:01.834: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Jun  3 23:17:01.834: INFO: 	Container grafana ready: true, restart count 0
Jun  3 23:17:01.834: INFO: 	Container prometheus ready: true, restart count 1
Jun  3 23:17:01.834: INFO: cmk-84nbw started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container nodereport ready: true, restart count 0
Jun  3 23:17:01.834: INFO: 	Container reconcile ready: true, restart count 0
Jun  3 23:17:01.834: INFO: service-proxy-disabled-9hx8v started at 2022-06-03 23:15:44 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Jun  3 23:17:01.834: INFO: nodeport-update-service-m2lns started at 2022-06-03 23:15:33 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:01.834: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun  3 23:17:02.073: INFO: 
Latency metrics for node node1
Jun  3 23:17:02.073: INFO: 
Logging node info for node node2
Jun  3 23:17:02.076: INFO: Node Info: &Node{ObjectMeta:{node2    bb95e261-57f4-4e78-b1f6-cbf8d9287d74 75378 0 2022-06-03 19:59:32 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 22:19:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-03 22:38:49 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:25 +0000 UTC,LastTransitionTime:2022-06-03 20:03:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:54 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:54 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:16:54 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:16:54 +0000 UTC,LastTransitionTime:2022-06-03 20:03:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:73f6f7c4482d4ddfadf38b35a5d03575,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:14b04379-324d-413e-8b7f-b1dff077c955,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun  3 23:17:02.077: INFO: 
Logging kubelet events for node node2
Jun  3 23:17:02.080: INFO: 
Logging pods the kubelet thinks are on node node2
Jun  3 23:17:02.100: INFO: kube-proxy-qmkcq started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container kube-proxy ready: true, restart count 1
Jun  3 23:17:02.100: INFO: node-feature-discovery-worker-gn855 started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container nfd-worker ready: true, restart count 0
Jun  3 23:17:02.100: INFO: collectd-q2l4t started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container collectd ready: true, restart count 0
Jun  3 23:17:02.100: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun  3 23:17:02.100: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun  3 23:17:02.100: INFO: up-down-3-npvsx started at 2022-06-03 23:16:11 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container up-down-3 ready: false, restart count 0
Jun  3 23:17:02.100: INFO: up-down-2-47dfc started at 2022-06-03 23:15:02 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container up-down-2 ready: false, restart count 0
Jun  3 23:17:02.100: INFO: kubernetes-dashboard-785dcbb76d-25c95 started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Jun  3 23:17:02.100: INFO: cmk-v446x started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container nodereport ready: true, restart count 0
Jun  3 23:17:02.100: INFO: 	Container reconcile ready: true, restart count 0
Jun  3 23:17:02.100: INFO: node-exporter-g45bm started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:17:02.100: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:17:02.100: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Jun  3 23:17:02.100: INFO: up-down-3-zzfwv started at 2022-06-03 23:16:11 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container up-down-3 ready: false, restart count 0
Jun  3 23:17:02.100: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun  3 23:17:02.100: INFO: execpod75npz started at 2022-06-03 23:15:45 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container agnhost-container ready: true, restart count 0
Jun  3 23:17:02.100: INFO: boom-server started at 2022-06-03 23:15:39 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container boom-server ready: true, restart count 0
Jun  3 23:17:02.100: INFO: service-proxy-toggled-ppqmq started at 2022-06-03 23:15:56 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container service-proxy-toggled ready: false, restart count 0
Jun  3 23:17:02.100: INFO: nginx-proxy-node2 started at 2022-06-03 19:59:32 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun  3 23:17:02.100: INFO: cmk-init-discover-node2-xvf8p started at 2022-06-03 20:12:02 +0000 UTC (0+3 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container discover ready: false, restart count 0
Jun  3 23:17:02.100: INFO: 	Container init ready: false, restart count 0
Jun  3 23:17:02.100: INFO: 	Container install ready: false, restart count 0
Jun  3 23:17:02.100: INFO: pod-server-1 started at 2022-06-03 23:15:50 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container agnhost-container ready: true, restart count 0
Jun  3 23:17:02.100: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 started at 2022-06-03 20:16:39 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container tas-extender ready: true, restart count 0
Jun  3 23:17:02.100: INFO: nodeport-update-service-j7g4j started at 2022-06-03 23:15:33 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun  3 23:17:02.100: INFO: kube-flannel-pc7wj started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:17:02.100: INFO: 	Init container install-cni ready: true, restart count 0
Jun  3 23:17:02.101: INFO: 	Container kube-flannel ready: true, restart count 1
Jun  3 23:17:02.101: INFO: kube-multus-ds-amd64-n7spl started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:17:02.101: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:17:02.371: INFO: 
Latency metrics for node node2
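The node dump, kubelet pod listing and latency metrics above are the debug output the suite collects when a spec fails. A rough manual equivalent can be gathered with stock kubectl, assuming access with the same kubeconfig; only the node name below is taken from this run:

# Conditions, capacity/allocatable and recent events for a node
kubectl --kubeconfig=/root/.kube/config describe node node1
# Pods scheduled on that node (the "pods the kubelet thinks are on node" listing above)
kubectl --kubeconfig=/root/.kube/config get pods -A -o wide --field-selector spec.nodeName=node1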
Jun  3 23:17:02.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-8013" for this suite.


• Failure [77.683 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Jun  3 23:17:01.396: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":1,"skipped":683,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
Jun  3 23:17:02.384: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun  3 23:15:33.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-4628
Jun  3 23:15:33.601: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-4628
I0603 23:15:33.613638      30 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-4628, replica count: 2
I0603 23:15:36.665119      30 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:39.665979      30 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:42.667011      30 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0603 23:15:45.669288      30 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun  3 23:15:45.669: INFO: Creating new exec pod
Jun  3 23:15:58.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Jun  3 23:15:58.964: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Jun  3 23:15:58.964: INFO: stdout: "nodeport-update-service-j7g4j"
Jun  3 23:15:58.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.29.135 80'
Jun  3 23:15:59.244: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.29.135 80\nConnection to 10.233.29.135 80 port [tcp/http] succeeded!\n"
Jun  3 23:15:59.244: INFO: stdout: "nodeport-update-service-m2lns"
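The probe that follows is retried roughly once per second until the NodePort answers. The same check can be run by hand by wrapping the kubectl invocation from this log in a small retry loop; only the loop itself is added here:

for attempt in $(seq 1 30); do
  if kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- \
       /bin/sh -c 'echo hostName | nc -v -t -w 2 10.10.190.207 31549'; then
    echo "NodePort 31549 reachable after $attempt attempt(s)"
    break
  fi
  sleep 1
done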
Jun  3 23:15:59.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:15:59.901: INFO: rc: 1
Jun  3 23:15:59.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:00.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:01.471: INFO: rc: 1
Jun  3 23:16:01.471: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:01.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:02.141: INFO: rc: 1
Jun  3 23:16:02.141: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:02.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:03.190: INFO: rc: 1
Jun  3 23:16:03.190: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:03.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:04.157: INFO: rc: 1
Jun  3 23:16:04.157: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31549
+ echo hostName
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:04.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:05.226: INFO: rc: 1
Jun  3 23:16:05.226: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:05.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:06.253: INFO: rc: 1
Jun  3 23:16:06.253: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:06.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:07.240: INFO: rc: 1
Jun  3 23:16:07.240: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:07.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:08.339: INFO: rc: 1
Jun  3 23:16:08.339: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:08.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:09.261: INFO: rc: 1
Jun  3 23:16:09.261: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:09.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:10.205: INFO: rc: 1
Jun  3 23:16:10.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:10.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:11.127: INFO: rc: 1
Jun  3 23:16:11.127: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:11.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:12.225: INFO: rc: 1
Jun  3 23:16:12.225: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:12.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:13.330: INFO: rc: 1
Jun  3 23:16:13.330: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:13.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:14.385: INFO: rc: 1
Jun  3 23:16:14.386: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:14.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:15.153: INFO: rc: 1
Jun  3 23:16:15.153: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:15.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:16.165: INFO: rc: 1
Jun  3 23:16:16.165: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:16.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:17.249: INFO: rc: 1
Jun  3 23:16:17.249: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:17.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:18.228: INFO: rc: 1
Jun  3 23:16:18.228: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:18.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:19.165: INFO: rc: 1
Jun  3 23:16:19.165: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:16:19.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:16:20.160: INFO: rc: 1
Jun  3 23:16:20.160: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[Jun  3 23:16:20.902 through 23:17:49.165: the same kubectl/nc probe was re-run roughly once per second; every attempt returned rc: 1 with "nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused" followed by "Retrying..." - the identical per-attempt blocks are condensed here.]
Jun  3 23:17:49.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:50.128: INFO: rc: 1
Jun  3 23:17:50.128: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:50.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:51.128: INFO: rc: 1
Jun  3 23:17:51.128: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:51.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:52.144: INFO: rc: 1
Jun  3 23:17:52.144: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:52.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:53.150: INFO: rc: 1
Jun  3 23:17:53.150: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:53.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:54.154: INFO: rc: 1
Jun  3 23:17:54.154: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:54.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:55.159: INFO: rc: 1
Jun  3 23:17:55.159: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:55.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:56.155: INFO: rc: 1
Jun  3 23:17:56.155: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:56.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:57.127: INFO: rc: 1
Jun  3 23:17:57.127: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:57.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:58.136: INFO: rc: 1
Jun  3 23:17:58.136: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:58.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:17:59.159: INFO: rc: 1
Jun  3 23:17:59.159: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:17:59.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:18:00.145: INFO: rc: 1
Jun  3 23:18:00.145: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:18:00.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549'
Jun  3 23:18:00.394: INFO: rc: 1
Jun  3 23:18:00.394: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4628 exec execpod75npz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31549:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31549
nc: connect to 10.10.190.207 port 31549 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun  3 23:18:00.395: FAIL: Unexpected error:
    <*errors.errorString | 0xc000446bc0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31549 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31549 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000a3af00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000a3af00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000a3af00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
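For context on what the block above records: each "Running ... / rc: 1 / Retrying..." cycle re-executes the same kubectl exec probe (echo hostName piped into nc against node IP 10.10.190.207, NodePort 31549) roughly once per second, as the timestamps suggest, until the 2m0s deadline expires. The real check lives in the e2e framework (test/e2e/network/service.go:1245 per the stack trace); the following is only a minimal stand-alone sketch that mirrors the observable behaviour, assuming kubectl is on PATH and reusing the kubeconfig path, namespace, exec pod, IP and port seen in this log.

    // Sketch only: mirrors the polling pattern in the log above, not the
    // framework's own helper. Shells out to kubectl and retries the nc probe
    // until a 2-minute deadline, roughly once per second.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        const (
            kubeconfig = "/root/.kube/config"
            namespace  = "services-4628"
            execPod    = "execpod75npz"
            endpoint   = "10.10.190.207 31549" // node IP and NodePort from the log
        )
        probe := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s", endpoint)
        deadline := time.Now().Add(2 * time.Minute)

        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl",
                "--kubeconfig="+kubeconfig, "--namespace="+namespace,
                "exec", execPod, "--", "/bin/sh", "-x", "-c", probe).CombinedOutput()
            if err == nil {
                fmt.Printf("service reachable:\n%s", out)
                return
            }
            fmt.Printf("probe failed (%v), retrying...\n", err)
            time.Sleep(time.Second)
        }
        fmt.Println("service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31549 over TCP protocol")
    }

Each "Connection refused" above means the connection attempt was actively rejected rather than timing out (which is why every probe returns within a fraction of a second); that typically indicates nothing was forwarding or listening on that NodePort on node1 at probe time.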
Jun  3 23:18:00.396: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4628".
STEP: Found 17 events.
Jun  3 23:18:00.418: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod75npz: { } Scheduled: Successfully assigned services-4628/execpod75npz to node2
Jun  3 23:18:00.418: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-j7g4j: { } Scheduled: Successfully assigned services-4628/nodeport-update-service-j7g4j to node2
Jun  3 23:18:00.418: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-m2lns: { } Scheduled: Successfully assigned services-4628/nodeport-update-service-m2lns to node1
Jun  3 23:18:00.418: INFO: At 2022-06-03 23:15:33 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-j7g4j
Jun  3 23:18:00.418: INFO: At 2022-06-03 23:15:33 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-m2lns
Jun  3 23:18:00.418: INFO: At 2022-06-03 23:15:35 +0000 UTC - event for nodeport-update-service-m2lns: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun  3 23:18:00.418: INFO: At 2022-06-03 23:15:35 +0000 UTC - event for nodeport-update-service-m2lns: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 270.686739ms
Jun  3 23:18:00.418: INFO: At 2022-06-03 23:15:35 +0000 UTC - event for nodeport-update-service-m2lns: {kubelet node1} Created: Created container nodeport-update-service
Jun  3 23:18:00.418: INFO: At 2022-06-03 23:15:35 +0000 UTC - event for nodeport-update-service-m2lns: {kubelet node1} Started: Started container nodeport-update-service
Jun  3 23:18:00.419: INFO: At 2022-06-03 23:15:36 +0000 UTC - event for nodeport-update-service-j7g4j: {kubelet node2} Created: Created container nodeport-update-service
Jun  3 23:18:00.419: INFO: At 2022-06-03 23:15:36 +0000 UTC - event for nodeport-update-service-j7g4j: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun  3 23:18:00.419: INFO: At 2022-06-03 23:15:36 +0000 UTC - event for nodeport-update-service-j7g4j: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 267.687556ms
Jun  3 23:18:00.419: INFO: At 2022-06-03 23:15:37 +0000 UTC - event for nodeport-update-service-j7g4j: {kubelet node2} Started: Started container nodeport-update-service
Jun  3 23:18:00.419: INFO: At 2022-06-03 23:15:48 +0000 UTC - event for execpod75npz: {kubelet node2} Created: Created container agnhost-container
Jun  3 23:18:00.419: INFO: At 2022-06-03 23:15:48 +0000 UTC - event for execpod75npz: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun  3 23:18:00.419: INFO: At 2022-06-03 23:15:48 +0000 UTC - event for execpod75npz: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 266.647374ms
Jun  3 23:18:00.419: INFO: At 2022-06-03 23:15:49 +0000 UTC - event for execpod75npz: {kubelet node2} Started: Started container agnhost-container
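The AfterEach step above collects and prints the namespace's events through the framework's own helpers. Purely as an illustration, and assuming direct client-go access with the same kubeconfig path, an equivalent collection could look like the sketch below (none of this code is taken from the framework).

    // Sketch: list events in the test namespace with client-go, similar to the
    // "Collecting events from namespace" dump above. Assumes /root/.kube/config
    // and module dependencies on k8s.io/client-go and k8s.io/apimachinery.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        events, err := clientset.CoreV1().Events("services-4628").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("Found %d events.\n", len(events.Items))
        for _, e := range events.Items {
            // Roughly matches the log format: timestamp, involved object, source, reason, message.
            fmt.Printf("At %v - event for %s: {%s %s} %s: %s\n",
                e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
        }
    }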
Jun  3 23:18:00.421: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Jun  3 23:18:00.421: INFO: execpod75npz                   node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:45 +0000 UTC  }]
Jun  3 23:18:00.421: INFO: nodeport-update-service-j7g4j  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:33 +0000 UTC  }]
Jun  3 23:18:00.422: INFO: nodeport-update-service-m2lns  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-03 23:15:33 +0000 UTC  }]
Jun  3 23:18:00.422: INFO: 
Jun  3 23:18:00.426: INFO: 
Logging node info for node master1
Jun  3 23:18:00.429: INFO: Node Info: &Node{ObjectMeta:{master1    4d289319-b343-4e96-a789-1a1cbeac007b 75734 0 2022-06-03 19:57:53 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-03 19:57:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-03 19:58:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-03 20:05:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:30 +0000 UTC,LastTransitionTime:2022-06-03 20:03:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:55 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:55 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:55 +0000 UTC,LastTransitionTime:2022-06-03 19:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:17:55 +0000 UTC,LastTransitionTime:2022-06-03 20:00:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3d668405f73a457bb0bcb4df5f4edac8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:c08279e3-a5cb-4f4d-b9f0-f2cde655469f,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun  3 23:18:00.430: INFO: 
Logging kubelet events for node master1
Jun  3 23:18:00.432: INFO: 
Logging pods the kubelet thinks are on node master1
Jun  3 23:18:00.454: INFO: kube-flannel-m8sj7 started at 2022-06-03 20:00:31 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Init container install-cni ready: true, restart count 0
Jun  3 23:18:00.454: INFO: 	Container kube-flannel ready: true, restart count 1
Jun  3 23:18:00.454: INFO: kube-multus-ds-amd64-n58qk started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:18:00.454: INFO: node-exporter-45rhg started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:18:00.454: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:18:00.454: INFO: kube-apiserver-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun  3 23:18:00.454: INFO: kube-controller-manager-master1 started at 2022-06-03 19:58:57 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jun  3 23:18:00.454: INFO: coredns-8474476ff8-rvc4v started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Container coredns ready: true, restart count 1
Jun  3 23:18:00.454: INFO: container-registry-65d7c44b96-2nzvn started at 2022-06-03 20:05:02 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Container docker-registry ready: true, restart count 0
Jun  3 23:18:00.454: INFO: 	Container nginx ready: true, restart count 0
Jun  3 23:18:00.454: INFO: kube-scheduler-master1 started at 2022-06-03 20:06:52 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Container kube-scheduler ready: true, restart count 0
Jun  3 23:18:00.454: INFO: kube-proxy-zgchh started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Container kube-proxy ready: true, restart count 2
Jun  3 23:18:00.454: INFO: dns-autoscaler-7df78bfcfb-vdtpl started at 2022-06-03 20:01:09 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.454: INFO: 	Container autoscaler ready: true, restart count 2
Jun  3 23:18:00.540: INFO: 
Latency metrics for node master1
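The per-node diagnostics above (node info, kubelet events, pods on the node, latency metrics) are the framework's standard failure dump and repeat below for each remaining node. As a hedged sketch only, the "Logging node info" portion corresponds roughly to fetching the Node object and printing its conditions with client-go, assuming the same kubeconfig; the node name "master1" is taken from the log.

    // Sketch: fetch one node and print its conditions, approximating the
    // "Logging node info" dump above. Not the framework's own code.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "master1", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            fmt.Printf("%s=%s (%s: %s)\n", c.Type, c.Status, c.Reason, c.Message)
        }
    }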
Jun  3 23:18:00.541: INFO: 
Logging node info for node master2
Jun  3 23:18:00.543: INFO: Node Info: &Node{ObjectMeta:{master2    a6ae2f0e-af0f-4dbb-a8e5-6d3a309310bc 75730 0 2022-06-03 19:58:21 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-03 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-03 20:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:28 +0000 UTC,LastTransitionTime:2022-06-03 20:03:28 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running 
on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:54 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:54 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:54 +0000 UTC,LastTransitionTime:2022-06-03 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:17:54 +0000 UTC,LastTransitionTime:2022-06-03 20:00:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:21e5c20b6e4a4d3fb07443d5575db572,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:52401484-5222-49a3-a465-e7215ade9b1e,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun  3 23:18:00.543: INFO: 
Logging kubelet events for node master2
Jun  3 23:18:00.546: INFO: 
Logging pods the kubelet thinks are on node master2
Jun  3 23:18:00.554: INFO: prometheus-operator-585ccfb458-xp2lz started at 2022-06-03 20:13:21 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:18:00.554: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:18:00.554: INFO: 	Container prometheus-operator ready: true, restart count 0
Jun  3 23:18:00.554: INFO: node-exporter-2h6sb started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:18:00.554: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:18:00.554: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:18:00.554: INFO: kube-apiserver-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.554: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun  3 23:18:00.554: INFO: kube-controller-manager-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.554: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun  3 23:18:00.554: INFO: kube-proxy-nlc58 started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.554: INFO: 	Container kube-proxy ready: true, restart count 1
Jun  3 23:18:00.554: INFO: kube-scheduler-master2 started at 2022-06-03 19:58:55 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.554: INFO: 	Container kube-scheduler ready: true, restart count 3
Jun  3 23:18:00.554: INFO: kube-flannel-sbdcv started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:18:00.554: INFO: 	Init container install-cni ready: true, restart count 2
Jun  3 23:18:00.554: INFO: 	Container kube-flannel ready: true, restart count 1
Jun  3 23:18:00.554: INFO: kube-multus-ds-amd64-ccvdq started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.555: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:18:00.631: INFO: 
Latency metrics for node master2
Jun  3 23:18:00.631: INFO: 
Logging node info for node master3
Jun  3 23:18:00.634: INFO: Node Info: &Node{ObjectMeta:{master3    559b19e7-45b0-4589-9993-9bba259aae96 75731 0 2022-06-03 19:58:27 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-03 19:58:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-03 20:00:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-03 20:08:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-03 20:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:22 +0000 UTC,LastTransitionTime:2022-06-03 20:03:22 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:54 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:54 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:54 +0000 UTC,LastTransitionTime:2022-06-03 19:58:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:17:54 +0000 UTC,LastTransitionTime:2022-06-03 20:03:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5b399eed918a40dd8324debc1c0777a3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:2fde35f0-2dc9-4531-9d2b-0bd4a6516b3a,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun  3 23:18:00.634: INFO: 
Logging kubelet events for node master3
Jun  3 23:18:00.636: INFO: 
Logging pods the kubelet thinks are on node master3
Jun  3 23:18:00.645: INFO: coredns-8474476ff8-dvwn7 started at 2022-06-03 20:01:07 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.646: INFO: 	Container coredns ready: true, restart count 1
Jun  3 23:18:00.646: INFO: node-exporter-jn8vv started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:18:00.646: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:18:00.646: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:18:00.646: INFO: kube-controller-manager-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.646: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun  3 23:18:00.646: INFO: kube-scheduler-master3 started at 2022-06-03 19:58:27 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.646: INFO: 	Container kube-scheduler ready: true, restart count 3
Jun  3 23:18:00.646: INFO: kube-proxy-m8r9n started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.646: INFO: 	Container kube-proxy ready: true, restart count 2
Jun  3 23:18:00.646: INFO: node-feature-discovery-controller-cff799f9f-8fbbp started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.646: INFO: 	Container nfd-controller ready: true, restart count 0
Jun  3 23:18:00.646: INFO: kube-apiserver-master3 started at 2022-06-03 20:03:18 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.646: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun  3 23:18:00.646: INFO: kube-flannel-nx64t started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:18:00.646: INFO: 	Init container install-cni ready: true, restart count 2
Jun  3 23:18:00.646: INFO: 	Container kube-flannel ready: true, restart count 2
Jun  3 23:18:00.646: INFO: kube-multus-ds-amd64-gjv49 started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.646: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:18:00.733: INFO: 
Latency metrics for node master3
Jun  3 23:18:00.733: INFO: 
Logging node info for node node1
Jun  3 23:18:00.737: INFO: Node Info: &Node{ObjectMeta:{node1    482ecf0f-7f88-436d-a313-227096fe8b8d 75736 0 2022-06-03 19:59:31 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-03 19:59:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 22:19:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:39 +0000 UTC,LastTransitionTime:2022-06-03 20:03:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:56 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:56 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:56 +0000 UTC,LastTransitionTime:2022-06-03 19:59:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:17:56 +0000 UTC,LastTransitionTime:2022-06-03 20:00:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d7b1fa7572024d5cac9eec5f4f2a75d3,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:a1aa46cd-ec2c-417b-ae44-b808bdc04113,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003977815,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 
k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
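Aside (editor's note, not part of the suite output): the Ready/MemoryPressure/DiskPressure/PIDPressure conditions buried in the Node Info dump above can be pulled programmatically. A minimal client-go sketch, assuming KUBECONFIG points at this test cluster and using the node name from the dump; this is not the e2e framework's own dumping helper:

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig source is an assumption; the e2e binary reads its own --kubeconfig flag.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Fetch the node and print the same NodeCondition fields shown in the dump.
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s %s (last transition %s)\n", c.Type, c.Status, c.Reason, c.LastTransitionTime)
	}
}
```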
Jun  3 23:18:00.738: INFO: 
Logging kubelet events for node node1
Jun  3 23:18:00.740: INFO: 
Logging pods the kubelet thinks are on node node1
Jun  3 23:18:00.753: INFO: prometheus-k8s-0 started at 2022-06-03 20:13:45 +0000 UTC (0+4 container statuses recorded)
Jun  3 23:18:00.753: INFO: 	Container config-reloader ready: true, restart count 0
Jun  3 23:18:00.753: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Jun  3 23:18:00.753: INFO: 	Container grafana ready: true, restart count 0
Jun  3 23:18:00.753: INFO: 	Container prometheus ready: true, restart count 1
Jun  3 23:18:00.753: INFO: cmk-84nbw started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:18:00.753: INFO: 	Container nodereport ready: true, restart count 0
Jun  3 23:18:00.753: INFO: 	Container reconcile ready: true, restart count 0
Jun  3 23:18:00.753: INFO: nodeport-update-service-m2lns started at 2022-06-03 23:15:33 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun  3 23:18:00.754: INFO: kube-flannel-hm6bh started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Init container install-cni ready: true, restart count 2
Jun  3 23:18:00.754: INFO: 	Container kube-flannel ready: true, restart count 3
Jun  3 23:18:00.754: INFO: nginx-proxy-node1 started at 2022-06-03 19:59:31 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun  3 23:18:00.754: INFO: cmk-init-discover-node1-n75dv started at 2022-06-03 20:11:42 +0000 UTC (0+3 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container discover ready: false, restart count 0
Jun  3 23:18:00.754: INFO: 	Container init ready: false, restart count 0
Jun  3 23:18:00.754: INFO: 	Container install ready: false, restart count 0
Jun  3 23:18:00.754: INFO: node-feature-discovery-worker-rg6tx started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container nfd-worker ready: true, restart count 0
Jun  3 23:18:00.754: INFO: node-exporter-f5xkq started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:18:00.754: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:18:00.754: INFO: collectd-nbx5z started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container collectd ready: true, restart count 0
Jun  3 23:18:00.754: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun  3 23:18:00.754: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun  3 23:18:00.754: INFO: kube-multus-ds-amd64-p7r6j started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:18:00.754: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun  3 23:18:00.754: INFO: cmk-webhook-6c9d5f8578-c927x started at 2022-06-03 20:12:25 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container cmk-webhook ready: true, restart count 0
Jun  3 23:18:00.754: INFO: kube-proxy-b6zlv started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.754: INFO: 	Container kube-proxy ready: true, restart count 2
Jun  3 23:18:00.877: INFO: 
Latency metrics for node node1
Jun  3 23:18:00.877: INFO: 
Logging node info for node node2
Jun  3 23:18:00.881: INFO: Node Info: &Node{ObjectMeta:{node2    bb95e261-57f4-4e78-b1f6-cbf8d9287d74 75732 0 2022-06-03 19:59:32 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-03 19:59:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-03 19:59:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-03 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-03 20:08:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-03 20:12:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-03 22:19:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-03 22:38:49 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-03 20:03:25 +0000 UTC,LastTransitionTime:2022-06-03 20:03:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:55 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:55 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-03 23:17:55 +0000 UTC,LastTransitionTime:2022-06-03 19:59:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-03 23:17:55 +0000 UTC,LastTransitionTime:2022-06-03 20:03:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:73f6f7c4482d4ddfadf38b35a5d03575,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:14b04379-324d-413e-8b7f-b1dff077c955,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.16,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:196eade72a7e16bdb2d709d29fdec354c8a3dbbb68e384608929b41c5ec41520 localhost:30500/cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727687199,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:bec5a478455b8244d18398355b5ec18540557180ddc029404300ca241638521b localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:eddd5e176ac5f79e2e8ba9a1b7023bbf7200edfa835da39de54a6bf3568f9668 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun  3 23:18:00.884: INFO: 
Logging kubelet events for node node2
Jun  3 23:18:00.890: INFO: 
Logging pods the kubelet thinks are on node node2
Jun  3 23:18:00.900: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt started at 2022-06-03 20:09:20 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.900: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun  3 23:18:00.900: INFO: execpod75npz started at 2022-06-03 23:15:45 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.900: INFO: 	Container agnhost-container ready: true, restart count 0
Jun  3 23:18:00.900: INFO: nginx-proxy-node2 started at 2022-06-03 19:59:32 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.900: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun  3 23:18:00.900: INFO: cmk-init-discover-node2-xvf8p started at 2022-06-03 20:12:02 +0000 UTC (0+3 container statuses recorded)
Jun  3 23:18:00.900: INFO: 	Container discover ready: false, restart count 0
Jun  3 23:18:00.900: INFO: 	Container init ready: false, restart count 0
Jun  3 23:18:00.900: INFO: 	Container install ready: false, restart count 0
Jun  3 23:18:00.900: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 started at 2022-06-03 20:16:39 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.900: INFO: 	Container tas-extender ready: true, restart count 0
Jun  3 23:18:00.900: INFO: nodeport-update-service-j7g4j started at 2022-06-03 23:15:33 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.900: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun  3 23:18:00.900: INFO: kube-flannel-pc7wj started at 2022-06-03 20:00:32 +0000 UTC (1+1 container statuses recorded)
Jun  3 23:18:00.900: INFO: 	Init container install-cni ready: true, restart count 0
Jun  3 23:18:00.900: INFO: 	Container kube-flannel ready: true, restart count 1
Jun  3 23:18:00.900: INFO: kube-multus-ds-amd64-n7spl started at 2022-06-03 20:00:40 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.900: INFO: 	Container kube-multus ready: true, restart count 1
Jun  3 23:18:00.900: INFO: kube-proxy-qmkcq started at 2022-06-03 19:59:36 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.900: INFO: 	Container kube-proxy ready: true, restart count 1
Jun  3 23:18:00.900: INFO: node-feature-discovery-worker-gn855 started at 2022-06-03 20:08:09 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.901: INFO: 	Container nfd-worker ready: true, restart count 0
Jun  3 23:18:00.901: INFO: collectd-q2l4t started at 2022-06-03 20:17:32 +0000 UTC (0+3 container statuses recorded)
Jun  3 23:18:00.901: INFO: 	Container collectd ready: true, restart count 0
Jun  3 23:18:00.901: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun  3 23:18:00.901: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun  3 23:18:00.901: INFO: kubernetes-dashboard-785dcbb76d-25c95 started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.901: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Jun  3 23:18:00.901: INFO: cmk-v446x started at 2022-06-03 20:12:24 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:18:00.901: INFO: 	Container nodereport ready: true, restart count 0
Jun  3 23:18:00.901: INFO: 	Container reconcile ready: true, restart count 0
Jun  3 23:18:00.901: INFO: node-exporter-g45bm started at 2022-06-03 20:13:28 +0000 UTC (0+2 container statuses recorded)
Jun  3 23:18:00.901: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun  3 23:18:00.901: INFO: 	Container node-exporter ready: true, restart count 0
Jun  3 23:18:00.901: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn started at 2022-06-03 20:01:12 +0000 UTC (0+1 container statuses recorded)
Jun  3 23:18:00.901: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Jun  3 23:18:01.603: INFO: 
Latency metrics for node node2
Jun  3 23:18:01.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4628" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [148.040 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Jun  3 23:18:00.395: Unexpected error:
      <*errors.errorString | 0xc000446bc0>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31549 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31549 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
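  Aside (editor's note): the failing assertion is a reachability check against the service's NodePort on the node's InternalIP. A minimal standalone sketch of the same check, with the endpoint copied from the error message above; the framework's own helper polls repeatedly for up to 2m0s (and may go through an exec pod), so this one-shot dial is only illustrative and assumes the machine running it can route to 10.10.190.207:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the NodePort endpoint from the failure message over TCP with a short timeout.
	conn, err := net.DialTimeout("tcp", "10.10.190.207:31549", 5*time.Second)
	if err != nil {
		fmt.Println("not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("reachable:", conn.RemoteAddr())
}
```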
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":3,"skipped":378,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Jun  3 23:18:01.624: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":2,"skipped":286,"failed":0}
Jun  3 23:16:47.376: INFO: Running AfterSuite actions on all nodes
Jun  3 23:18:01.690: INFO: Running AfterSuite actions on node 1
Jun  3 23:18:01.690: INFO: Skipping dumping logs from cluster



Summarizing 2 Failures:

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

Ran 26 of 5773 Specs in 233.623 seconds
FAIL! -- 24 Passed | 2 Failed | 0 Pending | 5747 Skipped


Ginkgo ran 1 suite in 3m55.327094011s
Test Suite Failed