Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1655507940 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Jun 17 23:19:02.578: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.583: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:02.615: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 17 23:19:02.675: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting
Jun 17 23:19:02.675: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting
Jun 17 23:19:02.675: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 17 23:19:02.675: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 17 23:19:02.675: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 17 23:19:02.693: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 17 23:19:02.693: INFO: e2e test version: v1.21.9
Jun 17 23:19:02.694: INFO: kube-apiserver version: v1.21.1
Jun 17 23:19:02.695: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.701: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Jun 17 23:19:02.706: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.730: INFO: Cluster IP family: ipv4
S
------------------------------
Jun 17 23:19:02.705: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.732: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Jun 17 23:19:02.719: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.739: INFO: Cluster IP family: ipv4
SSS
------------------------------
Jun 17 23:19:02.716: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.740: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSS
------------------------------
Jun 17 23:19:02.725: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.750: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Jun 17 23:19:02.731: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.752: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSS
------------------------------
Jun 17 23:19:02.736: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.761: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Jun 17 23:19:02.739: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.763: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Jun 17 23:19:02.746: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:02.768: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:02.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
W0617 23:19:02.974816      35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:02.975: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:02.976: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Jun 17 23:19:02.992: INFO: (0) /api/v1/nodes/node2:10250/proxy/logs/:
anaconda/
audit/
boot.log
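The directory listing above is the body returned by the kubelet's `logs` endpoint, reached through the apiserver's node proxy subresource with an explicit kubelet port (`node2:10250`), as the spec name says. A minimal sketch of how such a request URL is assembled (`APISERVER` is a hypothetical placeholder; only the node name and port come from this log):

```shell
# Build the node-proxy URL for the kubelet logs endpoint.
# APISERVER is an assumed address; node2/10250 match the log above.
APISERVER="https://127.0.0.1:6443"

proxy_logs_url() {
  # $1 = node name, $2 = explicit kubelet port, $3 = path under /logs/
  printf '%s/api/v1/nodes/%s:%s/proxy/logs/%s\n' "$APISERVER" "$1" "$2" "$3"
}

proxy_logs_url node2 10250 ""
```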
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun 17 23:19:03.165: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:03.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-1697" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:03.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W0617 23:19:03.718975      28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:03.719: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:03.722: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Jun 17 23:19:03.726: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:03.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9818" for this suite.


S [SKIPPING] [0.047 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:03.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0617 23:19:03.453699      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:03.454: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:03.455: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Jun 17 23:19:03.591: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-8593" to be "Succeeded or Failed"
Jun 17 23:19:03.593: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297679ms
Jun 17 23:19:05.597: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006038567s
Jun 17 23:19:07.600: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008715929s
Jun 17 23:19:09.605: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013450259s
Jun 17 23:19:11.610: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018491687s
Jun 17 23:19:13.615: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024377466s
STEP: Saw pod success
Jun 17 23:19:13.615: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:13.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8593" for this suite.


• [SLOW TEST:10.195 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":1,"skipped":253,"failed":0}
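In the test above the framework polls the `connectivity-test` pod's phase roughly every two seconds until it reaches "Succeeded or Failed" or the 5m0s timeout expires. A sketch of that polling pattern, with `get_phase` as a hypothetical stand-in for a `kubectl get pod -o jsonpath='{.status.phase}'` call (no cluster access is performed here):

```shell
# Poll a pod phase until it is terminal (Succeeded/Failed) or attempts run out.
wait_for_completion() {
  pod="$1"; attempts="$2"; i=0
  while [ "$i" -lt "$attempts" ]; do
    phase=$(get_phase "$pod")   # hypothetical stub for a kubectl query
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    i=$((i + 1))
    sleep 0   # the real framework waits ~2s between polls
  done
  echo "Timeout"
  return 1
}
```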

SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:13.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-8898
STEP: changing service nodeport-range-test to out-of-range NodePort 40514
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 40514
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:13.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8898" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":2,"skipped":280,"failed":0}
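The test above sets NodePort 40514, which lies outside the apiserver's default `--service-node-port-range` of 30000-32767, and passes when both the update and the re-create are rejected. A small sketch of that range check (the bounds are the Kubernetes defaults; this cluster's actual flag value is an assumption):

```shell
# Is a port inside the service node port range? Defaults to 30000-32767,
# the stock --service-node-port-range; 40514 from the test is out of range.
in_nodeport_range() {
  port="$1"; lo="${2:-30000}"; hi="${3:-32767}"
  [ "$port" -ge "$lo" ] && [ "$port" -le "$hi" ]
}

if in_nodeport_range 40514; then echo in-range; else echo out-of-range; fi
```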

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:02.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W0617 23:19:02.888338      39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:02.888: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:02.891: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5558.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5558.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5558.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5558.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 17 23:19:22.956: INFO: DNS probes using dns-5558/dns-test-186562aa-b144-4a25-ab48-a6f2f6e43e52 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:22.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5558" for this suite.


• [SLOW TEST:20.126 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":1,"skipped":15,"failed":0}
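Both probe scripts above compute the pod's A record by replacing the dots in the pod IP with dashes and appending `<namespace>.pod.cluster.local`, via the `hostname -i | awk` pipeline shown in the commands. That transform in isolation (the IP here is taken from elsewhere in this run and is only illustrative):

```shell
# Derive the in-cluster A-record name for a pod IP, as the probe scripts do.
pod_a_record() {
  ip="$1"; ns="$2"
  echo "$ip" | awk -F. -v ns="$ns" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

pod_a_record 10.244.3.92 dns-5558
```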

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:03.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0617 23:19:03.076204      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:03.076: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:03.078: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-8786
Jun 17 23:19:03.085: INFO: hairpin-test cluster ip: 10.233.45.6
STEP: creating a client/server pod
Jun 17 23:19:03.097: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:05.101: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:07.101: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:09.104: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:11.101: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:13.102: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:15.104: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:17.105: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:19.102: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:21.101: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:23.100: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-8786 to expose endpoints map[hairpin:[8080]]
Jun 17 23:19:23.108: INFO: successfully validated that service hairpin-test in namespace services-8786 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Jun 17 23:19:24.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8786 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Jun 17 23:19:25.239: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Jun 17 23:19:25.239: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 17 23:19:25.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8786 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.45.6 8080'
Jun 17 23:19:25.784: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.45.6 8080\nConnection to 10.233.45.6 8080 port [tcp/http-alt] succeeded!\n"
Jun 17 23:19:25.784: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:25.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8786" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:22.739 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":1,"skipped":98,"failed":0}
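The hairpin check above passes because `nc` reports that the TCP connection succeeded; the HTTP 400 body in stdout is irrelevant, since the pod only needs to reach itself through the service name and through the cluster IP. The framework's actual pass/fail logic lives in service.go and is not shown in this log; as a hedged illustration, the success marker in the captured stderr could be tested like this:

```shell
# Hypothetical check: did the nc probe's stderr report a successful connect?
hairpin_ok() {
  printf '%s' "$1" | grep -q 'succeeded!'
}
```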

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:02.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W0617 23:19:02.949214      26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:02.949: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:02.951: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Jun 17 23:19:02.968: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-13506768-ba3f-4db6-a817-e639f71f9c7c  dns-1703  bfe76142-f854-44a6-bf23-f04b6891d0cf 71729 0 2022-06-17 23:19:02 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-06-17 23:19:02 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-9dnkh,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-frqpn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-frqpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 17 23:19:18.978: INFO: testServerIP is 10.244.3.92
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jun 17 23:19:18.989: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-1703  6866b27e-51b5-4152-9329-5f1eaa088e06 72116 0 2022-06-17 23:19:18 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-06-17 23:19:18 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vn7d8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vn7d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.3.92],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,V
alue:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Jun 17 23:19:29.000: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-1703 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:19:29.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Jun 17 23:19:29.185: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-1703 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:19:29.185: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:19:29.318: INFO: Deleting pod e2e-dns-utils...
Jun 17 23:19:29.324: INFO: Deleting pod e2e-configmap-dns-server-13506768-ba3f-4db6-a817-e639f71f9c7c...
Jun 17 23:19:29.330: INFO: Deleting configmap e2e-coredns-configmap-9dnkh...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:29.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1703" for this suite.


• [SLOW TEST:26.415 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":41,"failed":0}
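The pod dump above carries a custom PodDNSConfig (nameserver 10.244.3.92, search domain resolv.conf.local, ndots:2) with DNSPolicy None, so kubelet writes exactly that config into the pod's /etc/resolv.conf, which the test then reads back with `cat`. A minimal sketch of rendering that config into the expected file (the helper name `render_resolv_conf` is illustrative, not part of the e2e framework):

```python
# Sketch: render the PodDNSConfig from the pod dump above into the
# /etc/resolv.conf the DNS e2e test expects to find inside the pod.
# With dnsPolicy: None, kubelet emits only what dnsConfig specifies.
dns_config = {
    "nameservers": ["10.244.3.92"],
    "searches": ["resolv.conf.local"],
    "options": [{"name": "ndots", "value": "2"}],
}

def render_resolv_conf(cfg):
    lines = [f"nameserver {ns}" for ns in cfg["nameservers"]]
    if cfg["searches"]:
        lines.append("search " + " ".join(cfg["searches"]))
    opts = [
        o["name"] + (":" + o["value"] if o.get("value") else "")
        for o in cfg["options"]
    ]
    if opts:
        lines.append("options " + " ".join(opts))
    return "\n".join(lines) + "\n"

print(render_resolv_conf(dns_config))
```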

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:29.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun 17 23:19:29.567: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jun 17 23:19:29.573: INFO: starting watch
STEP: patching
STEP: updating
Jun 17 23:19:29.584: INFO: waiting for watch events with expected annotations
Jun 17 23:19:29.584: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Jun 17 23:19:29.585: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:29.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-6153" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":115,"failed":0}
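Before exercising the create/get/list/watch/patch/update/delete/deletecollection verbs, the NetworkPolicy spec above walks API discovery from `/apis` down to the group-version. A small sketch of how those discovery paths compose (the `discovery_paths` helper is a hypothetical illustration, not framework code):

```python
# Sketch: the API discovery paths walked by the NetworkPolicy test above,
# composed from the group and version of the networking API.
GROUP, VERSION = "networking.k8s.io", "v1"

def discovery_paths(group, version):
    # /apis -> group discovery -> group/version resource discovery
    return ["/apis", f"/apis/{group}", f"/apis/{group}/{version}"]

paths = discovery_paths(GROUP, VERSION)
print(paths)
```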

SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:03.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0617 23:19:03.466416      36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:03.466: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:03.469: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-3538
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:03.596: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:03.628: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:05.632: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:07.631: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:09.632: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:11.631: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:13.632: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:15.634: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:17.632: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:19.634: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:21.632: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:23.633: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:19:23.638: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:19:25.642: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:19:31.662: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:19:31.662: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:31.669: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:31.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3538" for this suite.


S [SKIPPING] [28.235 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
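This skip, and the identical ones that follow, come from a node-count gate in the framework: the multi-node connectivity checks bail out when fewer than two usable nodes are found, and the `-1` suggests the count could not be determined at all. A hedged sketch of that gate (illustrative only; the real check lives in test/e2e/framework/network/utils.go):

```python
# Sketch of the two-node gate behind the "[SKIPPING]" blocks in this log.
class SkipTest(Exception):
    pass

def require_nodes(node_count, minimum=2):
    # node_count may be -1 when the framework could not determine it
    if node_count < minimum:
        raise SkipTest(f"Requires at least {minimum} nodes (not {node_count})")

try:
    require_nodes(-1)
except SkipTest as e:
    msg = str(e)
print(msg)
```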
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:31.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-4339
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-4339
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:31.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4339" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":320,"failed":0}
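The NodePort collision test above creates a second Service requesting the same NodePort, expects the apiserver to reject it, deletes the first Service to free the port, then retries successfully. A toy model of those allocation semantics (this is not the real apiserver port allocator, and port 30100 is hypothetical; the log does not show the actual port):

```python
# Toy model of the NodePort allocation semantics exercised above:
# a second claim on a held port fails; after release, the claim succeeds.
class PortAllocator:
    def __init__(self, lo=30000, hi=32767):  # default NodePort range
        self.lo, self.hi = lo, hi
        self.held = set()

    def claim(self, port):
        if not (self.lo <= port <= self.hi):
            raise ValueError(f"port {port} outside NodePort range")
        if port in self.held:
            raise ValueError(f"port {port} is already allocated")
        self.held.add(port)

    def release(self, port):
        self.held.discard(port)

alloc = PortAllocator()
alloc.claim(30100)          # nodeport-collision-1 takes the port
try:
    alloc.claim(30100)      # nodeport-collision-2 conflicts
except ValueError as e:
    conflict = str(e)
alloc.release(30100)        # deleting service 1 frees the port
alloc.claim(30100)          # retry now succeeds
```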

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:03.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-4510
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:03.878: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:03.914: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:05.917: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:07.917: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:09.917: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:11.919: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:13.918: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:15.917: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:17.918: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:19.923: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:21.918: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:23.921: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:25.918: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:19:25.922: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:19:27.925: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:19:31.947: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:19:31.947: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:31.953: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:31.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4510" for this suite.


S [SKIPPING] [28.217 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:03.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-9192
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:03.716: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:03.754: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:05.760: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:07.759: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:09.760: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:11.759: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:13.758: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:15.761: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:17.758: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:19.759: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:21.759: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:23.758: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:19:23.764: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:19:25.767: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:19:33.800: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:19:33.800: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:33.807: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:33.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9192" for this suite.


S [SKIPPING] [30.230 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:03.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0617 23:19:03.773240      30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:03.773: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:03.775: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-8716
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:03.877: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:03.910: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:05.913: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:07.916: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:09.916: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:11.915: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:13.915: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:15.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:17.916: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:19.919: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:21.914: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:23.914: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:19:23.920: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:19:25.923: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:19:35.955: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:19:35.955: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:35.962: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:35.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8716" for this suite.


S [SKIPPING] [32.227 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:23.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-1466
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:23.222: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:23.252: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:25.256: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:27.260: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:29.259: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:31.256: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:33.255: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:35.258: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:37.257: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:39.259: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:41.256: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:43.256: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:45.257: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:19:45.263: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:19:51.286: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:19:51.286: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:51.293: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:51.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1466" for this suite.


S [SKIPPING] [28.223 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:32.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Jun 17 23:19:32.131: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:34.135: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:36.136: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:38.135: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Jun 17 23:19:38.151: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:40.155: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:42.156: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:44.155: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:46.155: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Jun 17 23:19:48.173: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:50.179: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:52.177: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Jun 17 23:19:52.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-8950 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Jun 17 23:19:53.092: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
Jun 17 23:19:53.092: INFO: stdout: "tcp      6 3597 CLOSE_WAIT src=10.244.4.83 dst=10.10.190.208 sport=45280 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=29798 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Jun 17 23:19:53.092: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3597 CLOSE_WAIT src=10.244.4.83 dst=10.10.190.208 sport=45280 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=29798 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:53.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-8950" for this suite.


• [SLOW TEST:21.007 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":2,"skipped":428,"failed":0}
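The conntrack entry captured above shows a remaining timeout of 3597 seconds on a CLOSE_WAIT flow, i.e. close to one hour, which is consistent with kube-proxy's default `nf_conntrack_tcp_timeout_close_wait` setting of 1h rather than the much shorter kernel default. A sketch of parsing that logged entry (the regex and field names are my own; the entry string is taken verbatim from the log):

```python
import re

# Parse the conntrack entry captured by the CLOSE_WAIT test above and
# check the remaining timeout is within a minute of one hour.
entry = ("tcp      6 3597 CLOSE_WAIT src=10.244.4.83 dst=10.10.190.208 "
         "sport=45280 dport=11302 src=10.10.190.208 dst=10.10.190.207 "
         "sport=11302 dport=29798 [ASSURED] mark=0 "
         "secctx=system_u:object_r:unlabeled_t:s0 use=1")

# conntrack -L format: <proto> <protonum> <timeout> <state> key=value...
m = re.match(r"(\w+)\s+\d+\s+(\d+)\s+(\w+)", entry)
proto, timeout, state = m.group(1), int(m.group(2)), m.group(3)
print(proto, timeout, state)
```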

SSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:29.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-5536
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:29.763: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:29.794: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:31.799: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:33.798: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:35.799: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:37.798: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:39.803: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:41.799: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:43.799: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:45.798: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:47.798: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:49.801: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:19:49.806: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:19:51.811: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:19:55.831: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:19:55.831: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:55.838: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:55.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5536" for this suite.


S [SKIPPING] [26.216 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:51.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Jun 17 23:19:51.419: INFO: Creating new exec pod
Jun 17 23:19:55.437: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Jun 17 23:19:55.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6467 exec execpod-noendpointsrd2wx -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jun 17 23:19:56.817: INFO: rc: 1
Jun 17 23:19:56.817: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6467 exec execpod-noendpointsrd2wx -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:56.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6467" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:5.441 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":2,"skipped":100,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:31.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-6439
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:32.087: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:32.117: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:34.121: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:36.121: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:38.121: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:40.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:42.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:44.126: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:46.121: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:48.120: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:50.121: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:52.121: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:54.124: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:19:54.130: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:19:58.154: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:19:58.154: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:58.162: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:19:58.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6439" for this suite.


S [SKIPPING] [26.196 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:34.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-5326
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:34.251: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:34.283: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:36.287: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:38.288: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:40.288: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:42.288: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:44.288: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:46.289: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:48.287: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:50.288: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:52.287: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:54.288: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:19:54.294: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:19:56.298: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:20:04.322: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:20:04.322: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:04.329: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:04.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5326" for this suite.


S [SKIPPING] [30.196 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:37.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-7383
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:37.307: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:37.383: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:39.386: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:41.386: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:43.389: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:45.388: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:47.388: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:49.389: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:51.387: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:53.387: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:55.387: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:57.387: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:19:59.390: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:01.387: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:20:01.392: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:20:11.413: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:20:11.413: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:11.421: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:11.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7383" for this suite.


S [SKIPPING] [34.233 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:04.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-2945
STEP: deleting original service nodeport-reuse
Jun 17 23:20:04.901: INFO: Creating new host exec pod
Jun 17 23:20:04.915: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:06.918: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:08.922: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:10.919: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:12.919: INFO: The status of Pod hostexec is Running (Ready = true)
Jun 17 23:20:12.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2945 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :30468' | tail -n +2 | grep LISTEN'
Jun 17 23:20:13.486: INFO: stderr: "+ tail -n +2\n+ ss -ant46 'sport = :30468'\n+ grep LISTEN\n"
Jun 17 23:20:13.486: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 30468
STEP: deleting service nodeport-reuse in namespace services-2945
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:13.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2945" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:8.659 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":2,"skipped":702,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:13.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Jun 17 23:20:13.725: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:13.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-9255" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:53.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Jun 17 23:19:53.150: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:55.153: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:57.155: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Jun 17 23:19:57.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5349 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jun 17 23:19:57.647: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jun 17 23:19:57.647: INFO: stdout: "iptables"
Jun 17 23:19:57.647: INFO: proxyMode: iptables
Jun 17 23:19:57.655: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jun 17 23:19:57.657: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-5349
Jun 17 23:19:57.664: INFO: sourceip-test cluster ip: 10.233.24.59
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Jun 17 23:19:57.680: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:59.687: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:01.685: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:03.685: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:05.684: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:07.684: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-5349 to expose endpoints map[echo-sourceip:[8080]]
Jun 17 23:20:07.696: INFO: successfully validated that service sourceip-test in namespace services-5349 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Jun 17 23:20:07.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Jun 17 23:20:09.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104807, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104807, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104807, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104807, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-b6dfbb54b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 23:20:11.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104807, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104807, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104807, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104807, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-b6dfbb54b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 23:20:13.717: INFO: Waiting up to 2m0s to get response from 10.233.24.59:8080
Jun 17 23:20:13.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5349 exec pause-pod-b6dfbb54b-jqkkh -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.24.59:8080/clientip'
Jun 17 23:20:14.050: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.24.59:8080/clientip\n"
Jun 17 23:20:14.050: INFO: stdout: "10.244.3.119:36056"
STEP: Verifying the preserved source ip
Jun 17 23:20:14.050: INFO: Waiting up to 2m0s to get response from 10.233.24.59:8080
Jun 17 23:20:14.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5349 exec pause-pod-b6dfbb54b-qc2zt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.24.59:8080/clientip'
Jun 17 23:20:15.738: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.24.59:8080/clientip\n"
Jun 17 23:20:15.738: INFO: stdout: "10.244.4.91:43834"
STEP: Verifying the preserved source ip
Jun 17 23:20:15.738: INFO: Deleting deployment
Jun 17 23:20:15.744: INFO: Cleaning up the echo server pod
Jun 17 23:20:15.750: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:15.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5349" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:22.652 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":3,"skipped":432,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:16.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun 17 23:20:16.071: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:16.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7412" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:16.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jun 17 23:20:16.145: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jun 17 23:20:16.149: INFO: starting watch
STEP: patching
STEP: updating
Jun 17 23:20:16.156: INFO: waiting for watch events with expected annotations
Jun 17 23:20:16.156: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Jun 17 23:20:16.156: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:16.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-9648" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":587,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:16.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun 17 23:20:16.227: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:16.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-5494" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:16.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Jun 17 23:20:16.338: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Jun 17 23:20:16.443: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:16.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-8408" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.143 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:58.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-9498
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:58.579: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:58.611: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:00.614: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:02.615: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:04.614: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:06.615: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:08.617: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:10.618: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:12.614: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:14.615: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:16.615: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:18.615: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:20.616: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:20:20.620: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:20:26.653: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:20:26.653: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:26.660: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:26.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9498" for this suite.


S [SKIPPING] [28.206 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:56.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-2228
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:19:56.939: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:19:56.970: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:58.975: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:00.974: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:02.974: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:04.975: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:06.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:08.975: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:10.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:12.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:14.975: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:16.975: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:18.974: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:20.974: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:20:20.978: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:20:26.998: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:20:26.998: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:27.005: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:27.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2228" for this suite.


S [SKIPPING] [30.183 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:14.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-435
STEP: creating a client pod for probing the service svc-udp
Jun 17 23:19:14.507: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:16.510: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:18.511: INFO: The status of Pod pod-client is Running (Ready = true)
Jun 17 23:19:18.531: INFO: Pod client logs: Fri Jun 17 23:19:16 UTC 2022
Fri Jun 17 23:19:16 UTC 2022 Try: 1

Fri Jun 17 23:19:16 UTC 2022 Try: 2

Fri Jun 17 23:19:16 UTC 2022 Try: 3

Fri Jun 17 23:19:16 UTC 2022 Try: 4

Fri Jun 17 23:19:16 UTC 2022 Try: 5

Fri Jun 17 23:19:16 UTC 2022 Try: 6

Fri Jun 17 23:19:16 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Jun 17 23:19:18.545: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:20.549: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:22.549: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:24.549: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:26.549: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:28.548: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:30.551: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:32.548: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-435 to expose endpoints map[pod-server-1:[80]]
Jun 17 23:19:32.558: INFO: successfully validated that service svc-udp in namespace conntrack-435 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Jun 17 23:20:32.583: INFO: Pod client logs: Fri Jun 17 23:19:16 UTC 2022
Fri Jun 17 23:19:16 UTC 2022 Try: 1

Fri Jun 17 23:19:16 UTC 2022 Try: 2

Fri Jun 17 23:19:16 UTC 2022 Try: 3

Fri Jun 17 23:19:16 UTC 2022 Try: 4

Fri Jun 17 23:19:16 UTC 2022 Try: 5

Fri Jun 17 23:19:16 UTC 2022 Try: 6

Fri Jun 17 23:19:16 UTC 2022 Try: 7

Fri Jun 17 23:19:21 UTC 2022 Try: 8

Fri Jun 17 23:19:21 UTC 2022 Try: 9

Fri Jun 17 23:19:21 UTC 2022 Try: 10

Fri Jun 17 23:19:21 UTC 2022 Try: 11

Fri Jun 17 23:19:21 UTC 2022 Try: 12

Fri Jun 17 23:19:21 UTC 2022 Try: 13

Fri Jun 17 23:19:26 UTC 2022 Try: 14

Fri Jun 17 23:19:26 UTC 2022 Try: 15

Fri Jun 17 23:19:26 UTC 2022 Try: 16

Fri Jun 17 23:19:26 UTC 2022 Try: 17

Fri Jun 17 23:19:26 UTC 2022 Try: 18

Fri Jun 17 23:19:26 UTC 2022 Try: 19

Fri Jun 17 23:19:31 UTC 2022 Try: 20

Fri Jun 17 23:19:31 UTC 2022 Try: 21

Fri Jun 17 23:19:31 UTC 2022 Try: 22

Fri Jun 17 23:19:31 UTC 2022 Try: 23

Fri Jun 17 23:19:31 UTC 2022 Try: 24

Fri Jun 17 23:19:31 UTC 2022 Try: 25

Fri Jun 17 23:19:36 UTC 2022 Try: 26

Fri Jun 17 23:19:36 UTC 2022 Try: 27

Fri Jun 17 23:19:36 UTC 2022 Try: 28

Fri Jun 17 23:19:36 UTC 2022 Try: 29

Fri Jun 17 23:19:36 UTC 2022 Try: 30

Fri Jun 17 23:19:36 UTC 2022 Try: 31

Fri Jun 17 23:19:41 UTC 2022 Try: 32

Fri Jun 17 23:19:41 UTC 2022 Try: 33

Fri Jun 17 23:19:41 UTC 2022 Try: 34

Fri Jun 17 23:19:41 UTC 2022 Try: 35

Fri Jun 17 23:19:41 UTC 2022 Try: 36

Fri Jun 17 23:19:41 UTC 2022 Try: 37

Fri Jun 17 23:19:46 UTC 2022 Try: 38

Fri Jun 17 23:19:46 UTC 2022 Try: 39

Fri Jun 17 23:19:46 UTC 2022 Try: 40

Fri Jun 17 23:19:46 UTC 2022 Try: 41

Fri Jun 17 23:19:46 UTC 2022 Try: 42

Fri Jun 17 23:19:46 UTC 2022 Try: 43

Fri Jun 17 23:19:51 UTC 2022 Try: 44

Fri Jun 17 23:19:51 UTC 2022 Try: 45

Fri Jun 17 23:19:51 UTC 2022 Try: 46

Fri Jun 17 23:19:51 UTC 2022 Try: 47

Fri Jun 17 23:19:51 UTC 2022 Try: 48

Fri Jun 17 23:19:51 UTC 2022 Try: 49

Fri Jun 17 23:19:56 UTC 2022 Try: 50

Fri Jun 17 23:19:56 UTC 2022 Try: 51

Fri Jun 17 23:19:56 UTC 2022 Try: 52

Fri Jun 17 23:19:56 UTC 2022 Try: 53

Fri Jun 17 23:19:56 UTC 2022 Try: 54

Fri Jun 17 23:19:56 UTC 2022 Try: 55

Fri Jun 17 23:20:01 UTC 2022 Try: 56

Fri Jun 17 23:20:01 UTC 2022 Try: 57

Fri Jun 17 23:20:01 UTC 2022 Try: 58

Fri Jun 17 23:20:01 UTC 2022 Try: 59

Fri Jun 17 23:20:01 UTC 2022 Try: 60

Fri Jun 17 23:20:01 UTC 2022 Try: 61

Fri Jun 17 23:20:06 UTC 2022 Try: 62

Fri Jun 17 23:20:06 UTC 2022 Try: 63

Fri Jun 17 23:20:06 UTC 2022 Try: 64

Fri Jun 17 23:20:06 UTC 2022 Try: 65

Fri Jun 17 23:20:06 UTC 2022 Try: 66

Fri Jun 17 23:20:06 UTC 2022 Try: 67

Fri Jun 17 23:20:11 UTC 2022 Try: 68

Fri Jun 17 23:20:11 UTC 2022 Try: 69

Fri Jun 17 23:20:11 UTC 2022 Try: 70

Fri Jun 17 23:20:11 UTC 2022 Try: 71

Fri Jun 17 23:20:11 UTC 2022 Try: 72

Fri Jun 17 23:20:11 UTC 2022 Try: 73

Fri Jun 17 23:20:16 UTC 2022 Try: 74

Fri Jun 17 23:20:16 UTC 2022 Try: 75

Fri Jun 17 23:20:16 UTC 2022 Try: 76

Fri Jun 17 23:20:16 UTC 2022 Try: 77

Fri Jun 17 23:20:16 UTC 2022 Try: 78

Fri Jun 17 23:20:16 UTC 2022 Try: 79

Fri Jun 17 23:20:21 UTC 2022 Try: 80

Fri Jun 17 23:20:21 UTC 2022 Try: 81

Fri Jun 17 23:20:21 UTC 2022 Try: 82

Fri Jun 17 23:20:21 UTC 2022 Try: 83

Fri Jun 17 23:20:21 UTC 2022 Try: 84

Fri Jun 17 23:20:21 UTC 2022 Try: 85

Fri Jun 17 23:20:26 UTC 2022 Try: 86

Fri Jun 17 23:20:26 UTC 2022 Try: 87

Fri Jun 17 23:20:26 UTC 2022 Try: 88

Fri Jun 17 23:20:26 UTC 2022 Try: 89

Fri Jun 17 23:20:26 UTC 2022 Try: 90

Fri Jun 17 23:20:26 UTC 2022 Try: 91

Fri Jun 17 23:20:31 UTC 2022 Try: 92

Fri Jun 17 23:20:31 UTC 2022 Try: 93

Fri Jun 17 23:20:31 UTC 2022 Try: 94

Fri Jun 17 23:20:31 UTC 2022 Try: 95

Fri Jun 17 23:20:31 UTC 2022 Try: 96

Fri Jun 17 23:20:31 UTC 2022 Try: 97

Jun 17 23:20:32.584: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002953500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002953500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002953500, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-435".
STEP: Found 8 events.
Jun 17 23:20:32.588: INFO: At 2022-06-17 23:19:16 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 23:20:32.588: INFO: At 2022-06-17 23:19:16 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 312.970356ms
Jun 17 23:20:32.588: INFO: At 2022-06-17 23:19:16 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Jun 17 23:20:32.588: INFO: At 2022-06-17 23:19:16 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Jun 17 23:20:32.588: INFO: At 2022-06-17 23:19:20 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 23:20:32.588: INFO: At 2022-06-17 23:19:21 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 478.849756ms
Jun 17 23:20:32.588: INFO: At 2022-06-17 23:19:21 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Jun 17 23:20:32.588: INFO: At 2022-06-17 23:19:22 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Jun 17 23:20:32.591: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Jun 17 23:20:32.591: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:14 +0000 UTC  }]
Jun 17 23:20:32.591: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:18 +0000 UTC  }]
Jun 17 23:20:32.591: INFO: 
Jun 17 23:20:32.595: INFO: 
Logging node info for node master1
Jun 17 23:20:32.597: INFO: Node Info: &Node{ObjectMeta:{master1    47691bb2-4ee9-4386-8bec-0f9db1917afd 73870 0 2022-06-17 19:59:00 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:23 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:23 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:23 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:20:23 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:20:32.598: INFO: 
Logging kubelet events for node master1
Jun 17 23:20:32.600: INFO: 
Logging pods the kubelet thinks is on node master1
Jun 17 23:20:32.624: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.624: INFO: 	Container kube-scheduler ready: true, restart count 0
Jun 17 23:20:32.624: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.624: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 17 23:20:32.624: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:20:32.624: INFO: 	Container docker-registry ready: true, restart count 0
Jun 17 23:20:32.624: INFO: 	Container nginx ready: true, restart count 0
Jun 17 23:20:32.624: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:20:32.624: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:20:32.624: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:20:32.624: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.624: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 17 23:20:32.624: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.624: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 17 23:20:32.624: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:20:32.624: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:20:32.624: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:20:32.624: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.624: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:20:32.711: INFO: 
Latency metrics for node master1
Jun 17 23:20:32.711: INFO: 
Logging node info for node master2
Jun 17 23:20:32.714: INFO: Node Info: &Node{ObjectMeta:{master2    71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 74034 0 2022-06-17 19:59:29 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:30 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:30 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:30 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:20:30 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:20:32.714: INFO: 
Logging kubelet events for node master2
Jun 17 23:20:32.716: INFO: 
Logging pods the kubelet thinks are on node master2
Jun 17 23:20:32.732: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 17 23:20:32.732: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Container kube-scheduler ready: true, restart count 2
Jun 17 23:20:32.732: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:20:32.732: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:20:32.732: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Container nfd-controller ready: true, restart count 0
Jun 17 23:20:32.732: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:20:32.732: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:20:32.732: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 17 23:20:32.732: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 17 23:20:32.732: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:20:32.732: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Container coredns ready: true, restart count 1
Jun 17 23:20:32.732: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.732: INFO: 	Container autoscaler ready: true, restart count 1
Jun 17 23:20:32.832: INFO: 
Latency metrics for node master2
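[editor note] The per-pod lines above ("Container X ready: true, restart count N") are the quickest place to spot flapping containers when triaging a failed run. A minimal sketch of scanning a saved copy of this output for nonzero restart counts, with two sample lines inlined to mirror the dump format (file path and samples are illustrative, not from the run):

```shell
#!/bin/sh
# Flag containers whose restart count is nonzero in saved e2e output.
# Sample lines mirror the kubelet pod-dump format seen in this log.
cat > /tmp/e2e-sample.log <<'EOF'
Jun 17 23:20:32.732: INFO: 	Container kube-scheduler ready: true, restart count 2
Jun 17 23:20:32.732: INFO: 	Container nfd-controller ready: true, restart count 0
EOF
# Match "restart count" followed by a number with a nonzero leading digit.
grep -E 'restart count [1-9][0-9]*' /tmp/e2e-sample.log
```

Restart counts of 1–2 on control-plane static pods (as seen above) are common after node reboots; a count that grows between dumps is the signal worth chasing.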
Jun 17 23:20:32.832: INFO: 
Logging node info for node master3
Jun 17 23:20:32.835: INFO: Node Info: &Node{ObjectMeta:{master3    4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 74004 0 2022-06-17 19:59:37 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:30 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:30 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:30 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:20:30 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:20:32.835: INFO: 
Logging kubelet events for node master3
Jun 17 23:20:32.837: INFO: 
Logging pods the kubelet thinks are on node master3
Jun 17 23:20:32.851: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.851: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 17 23:20:32.851: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.851: INFO: 	Container coredns ready: true, restart count 1
Jun 17 23:20:32.851: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:20:32.851: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:20:32.851: INFO: 	Container prometheus-operator ready: true, restart count 0
Jun 17 23:20:32.851: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:20:32.851: INFO: 	Init container install-cni ready: true, restart count 0
Jun 17 23:20:32.851: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:20:32.851: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.852: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:20:32.852: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:20:32.852: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:20:32.852: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:20:32.852: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.852: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 17 23:20:32.852: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.852: INFO: 	Container kube-scheduler ready: true, restart count 2
Jun 17 23:20:32.852: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.852: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 17 23:20:32.947: INFO: 
Latency metrics for node master3
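[editor note] Each node dump carries the same NodeCondition list (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready); when scanning several of these dumps, the Ready status can be extracted mechanically. A minimal sketch over a saved dump line (sample inlined for illustration; against a live cluster, `kubectl get node <name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'` should give the same answer):

```shell
#!/bin/sh
# Pull the Ready condition status out of a saved Node dump.
# The sample line mirrors the NodeCondition format in this log.
cat > /tmp/node-dump.txt <<'EOF'
NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:20:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,}
EOF
# Capture the Status field immediately following Type:Ready.
sed -n 's/.*Type:Ready,Status:\([A-Za-z]*\),.*/\1/p' /tmp/node-dump.txt
```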
Jun 17 23:20:32.947: INFO: 
Logging node info for node node1
Jun 17 23:20:32.950: INFO: Node Info: &Node{ObjectMeta:{node1    2db3a28c-448f-4511-9db8-4ef739b681b1 73916 0 2022-06-17 20:00:39 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 
20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 22:24:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:26 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:26 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:26 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:20:26 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 
k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:20:32.951: INFO: 
Logging kubelet events for node node1
Jun 17 23:20:32.954: INFO: 
Logging pods the kubelet thinks are on node node1
Jun 17 23:20:32.977: INFO: service-headless-xrtsw started at 2022-06-17 23:19:03 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container service-headless ready: true, restart count 0
Jun 17 23:20:32.977: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container cmk-webhook ready: true, restart count 0
Jun 17 23:20:32.977: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container nodereport ready: true, restart count 0
Jun 17 23:20:32.977: INFO: 	Container reconcile ready: true, restart count 0
Jun 17 23:20:32.977: INFO: netserver-0 started at 2022-06-17 23:19:56 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:20:32.977: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 17 23:20:32.977: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:20:32.977: INFO: service-headless-toggled-s6fjx started at 2022-06-17 23:19:24 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container service-headless-toggled ready: true, restart count 0
Jun 17 23:20:32.977: INFO: netserver-0 started at 2022-06-17 23:20:27 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container webserver ready: false, restart count 0
Jun 17 23:20:32.977: INFO: verify-service-up-exec-pod-nq2xg started at 2022-06-17 23:20:30 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container agnhost-container ready: false, restart count 0
Jun 17 23:20:32.977: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 17 23:20:32.977: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container tas-extender ready: true, restart count 0
Jun 17 23:20:32.977: INFO: nodeport-update-service-k4mst started at 2022-06-17 23:19:56 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun 17 23:20:32.977: INFO: up-down-3-t4l2k started at 2022-06-17 23:20:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container up-down-3 ready: true, restart count 0
Jun 17 23:20:32.977: INFO: startup-script started at 2022-06-17 23:19:36 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container startup-script ready: true, restart count 0
Jun 17 23:20:32.977: INFO: up-down-3-l2f57 started at 2022-06-17 23:20:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container up-down-3 ready: true, restart count 0
Jun 17 23:20:32.977: INFO: externalip-test-h8hq5 started at 2022-06-17 23:20:26 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container externalip-test ready: true, restart count 0
Jun 17 23:20:32.977: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
Jun 17 23:20:32.977: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container config-reloader ready: true, restart count 0
Jun 17 23:20:32.977: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Jun 17 23:20:32.977: INFO: 	Container grafana ready: true, restart count 0
Jun 17 23:20:32.977: INFO: 	Container prometheus ready: true, restart count 1
Jun 17 23:20:32.977: INFO: up-down-2-d6d2w started at 2022-06-17 23:19:21 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container up-down-2 ready: true, restart count 0
Jun 17 23:20:32.977: INFO: execpod7j5pc started at 2022-06-17 23:20:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 17 23:20:32.977: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 17 23:20:32.977: INFO: netserver-0 started at 2022-06-17 23:20:14 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container webserver ready: false, restart count 0
Jun 17 23:20:32.977: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container collectd ready: true, restart count 0
Jun 17 23:20:32.977: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 17 23:20:32.977: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 17 23:20:32.977: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 17 23:20:32.977: INFO: netserver-0 started at 2022-06-17 23:20:16 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container webserver ready: false, restart count 0
Jun 17 23:20:32.977: INFO: pod-client started at 2022-06-17 23:19:14 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container pod-client ready: true, restart count 0
Jun 17 23:20:32.977: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:20:32.977: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:20:32.977: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:20:32.977: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:20:32.977: INFO: pod-client started at 2022-06-17 23:20:11 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container pod-client ready: true, restart count 0
Jun 17 23:20:32.977: INFO: e2e-net-exec started at 2022-06-17 23:19:32 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container e2e-net-exec ready: false, restart count 0
Jun 17 23:20:32.977: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:20:32.977: INFO: 	Container discover ready: false, restart count 0
Jun 17 23:20:32.977: INFO: 	Container init ready: false, restart count 0
Jun 17 23:20:32.977: INFO: 	Container install ready: false, restart count 0
Jun 17 23:20:34.210: INFO: 
Latency metrics for node node1
Jun 17 23:20:34.210: INFO: 
Logging node info for node node2
Jun 17 23:20:34.212: INFO: Node Info: &Node{ObjectMeta:{node2    467d2582-10f7-475b-9f20-5b7c2e46267a 74074 0 2022-06-17 20:00:37 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 
20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 22:24:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-17 23:05:09 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:31 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:31 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:20:31 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:20:31 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:20:34.213: INFO: 
Logging kubelet events for node node2
Jun 17 23:20:34.215: INFO: 
Logging pods the kubelet thinks are on node node2
Jun 17 23:20:34.232: INFO: netserver-1 started at 2022-06-17 23:20:14 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.232: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:20:34.232: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.232: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 17 23:20:34.232: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:20:34.232: INFO: 	Container discover ready: false, restart count 0
Jun 17 23:20:34.232: INFO: 	Container init ready: false, restart count 0
Jun 17 23:20:34.232: INFO: 	Container install ready: false, restart count 0
Jun 17 23:20:34.232: INFO: service-headless-pskjv started at 2022-06-17 23:19:03 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.232: INFO: 	Container service-headless ready: true, restart count 0
Jun 17 23:20:34.232: INFO: pod-server-1 started at 2022-06-17 23:19:18 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.232: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 17 23:20:34.232: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:20:34.232: INFO: 	Container collectd ready: true, restart count 0
Jun 17 23:20:34.233: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 17 23:20:34.233: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 17 23:20:34.233: INFO: netserver-1 started at 2022-06-17 23:20:16 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container webserver ready: false, restart count 0
Jun 17 23:20:34.233: INFO: boom-server started at 2022-06-17 23:19:26 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container boom-server ready: true, restart count 0
Jun 17 23:20:34.233: INFO: nodeport-update-service-9nsxw started at 2022-06-17 23:19:56 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun 17 23:20:34.233: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 17 23:20:34.233: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 17 23:20:34.233: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:20:34.233: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container nodereport ready: true, restart count 0
Jun 17 23:20:34.233: INFO: 	Container reconcile ready: true, restart count 0
Jun 17 23:20:34.233: INFO: execpodt7kdj started at 2022-06-17 23:20:32 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container agnhost-container ready: false, restart count 0
Jun 17 23:20:34.233: INFO: pod-server-2 started at 2022-06-17 23:20:30 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container agnhost-container ready: false, restart count 0
Jun 17 23:20:34.233: INFO: service-headless-toggled-grrmp started at 2022-06-17 23:19:24 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container service-headless-toggled ready: true, restart count 0
Jun 17 23:20:34.233: INFO: verify-service-down-host-exec-pod started at 2022-06-17 23:20:30 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 17 23:20:34.233: INFO: up-down-3-krqb8 started at 2022-06-17 23:20:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container up-down-3 ready: true, restart count 0
Jun 17 23:20:34.233: INFO: service-headless-rjpnc started at 2022-06-17 23:19:03 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container service-headless ready: true, restart count 0
Jun 17 23:20:34.233: INFO: netserver-1 started at 2022-06-17 23:19:56 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:20:34.233: INFO: up-down-2-r6tc7 started at 2022-06-17 23:19:21 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container up-down-2 ready: true, restart count 0
Jun 17 23:20:34.233: INFO: netserver-1 started at 2022-06-17 23:20:27 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container webserver ready: false, restart count 0
Jun 17 23:20:34.233: INFO: up-down-2-htckl started at 2022-06-17 23:19:21 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container up-down-2 ready: true, restart count 0
Jun 17 23:20:34.233: INFO: externalip-test-5fwtv started at 2022-06-17 23:20:26 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container externalip-test ready: true, restart count 0
Jun 17 23:20:34.233: INFO: verify-service-up-host-exec-pod started at 2022-06-17 23:20:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 17 23:20:34.233: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 17 23:20:34.233: INFO: service-headless-toggled-x9tgf started at 2022-06-17 23:19:24 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container service-headless-toggled ready: true, restart count 0
Jun 17 23:20:34.233: INFO: pod-server-1 started at 2022-06-17 23:20:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 17 23:20:34.233: INFO: test-container-pod started at 2022-06-17 23:20:20 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:20:34.233: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 17 23:20:34.233: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:20:34.233: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:20:34.233: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:20:34.233: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:20:34.233: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:20:34.982: INFO: 
Latency metrics for node node2
Jun 17 23:20:34.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-435" for this suite.


• Failure [80.536 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Jun 17 23:20:32.584: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":2,"skipped":645,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:03.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0617 23:19:03.039699      40 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:03.040: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:03.041: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-3204
STEP: creating service service-headless in namespace services-3204
STEP: creating replication controller service-headless in namespace services-3204
I0617 23:19:03.052810      40 runners.go:190] Created replication controller with name: service-headless, namespace: services-3204, replica count: 3
I0617 23:19:06.103358      40 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:09.103966      40 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:12.105401      40 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:15.108450      40 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:18.109424      40 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:21.110379      40 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:24.112425      40 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-3204
STEP: creating service service-headless-toggled in namespace services-3204
STEP: creating replication controller service-headless-toggled in namespace services-3204
I0617 23:19:24.125105      40 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-3204, replica count: 3
I0617 23:19:27.176374      40 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:30.178663      40 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:33.180290      40 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Jun 17 23:19:33.183: INFO: Creating new host exec pod
Jun 17 23:19:33.201: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:35.207: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:37.204: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:39.205: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 17 23:19:39.205: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 17 23:19:47.221: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.211:80 2>&1 || true; echo; done" in pod services-3204/verify-service-up-host-exec-pod
Jun 17 23:19:47.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.211:80 2>&1 || true; echo; done'
Jun 17 23:19:47.967: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n"
Jun 17 23:19:47.968: INFO: stdout: "service-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-head
less-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grr
mp\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\n"
Jun 17 23:19:47.968: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.211:80 2>&1 || true; echo; done" in pod services-3204/verify-service-up-exec-pod-ltzst
Jun 17 23:19:47.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-up-exec-pod-ltzst -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.211:80 2>&1 || true; echo; done'
Jun 17 23:19:48.386: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n"
Jun 17 23:19:48.387: INFO: stdout: "service-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\n..." (150 responses in total, interleaved across the three backends s6fjx, grrmp, and x9tgf; full list truncated)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3204
STEP: Deleting pod verify-service-up-exec-pod-ltzst in namespace services-3204
STEP: verifying service-headless is not up
Jun 17 23:19:48.400: INFO: Creating new host exec pod
Jun 17 23:19:48.411: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:50.417: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:52.414: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 23:19:52.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.5.221:80 && echo service-down-failed'
Jun 17 23:19:54.899: INFO: rc: 28
Jun 17 23:19:54.899: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.5.221:80 && echo service-down-failed" in pod services-3204/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.5.221:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.5.221:80
command terminated with exit code 28

error:
exit status 28
Output: 
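The probe above relies on curl exit code 28 (CURLE_OPERATION_TIMEDOUT): the test appends `&& echo service-down-failed`, so the marker string is only printed if the VIP still answers. A minimal sketch of that shell pattern, with a stand-in `probe` function simulating the timeout instead of the real `curl -g -s --connect-timeout 2 http://10.233.5.221:80` run inside the host exec pod:

```shell
# Stand-in for curl timing out against the torn-down ClusterIP;
# the real command exits 28 (CURLE_OPERATION_TIMEDOUT).
probe() { return 28; }

# Same shape as the test's command string: the marker is only
# echoed when the probe (curl) succeeds.
if probe && echo service-down-failed; then
  result="service unexpectedly reachable"
else
  result="service is down as expected"
fi
echo "$result"   # service is down as expected
```

The test framework then checks that "service-down-failed" never appears in stdout, which is exactly what the rc: 28 lines in the log confirm.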
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3204
STEP: adding service.kubernetes.io/headless label
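The label toggle in this step could be reproduced from the command line roughly as follows. This is a hedged sketch, not the framework's own code; the service name `service-headless-toggled` and namespace `services-3204` are taken from this run:

```shell
# Hypothetical kubectl equivalent of the label toggle the test performs.
# service.kubernetes.io/headless is the well-known label that tells
# kube-proxy to stop programming rules for the service.
kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 \
  label service service-headless-toggled service.kubernetes.io/headless=""

# Later the test removes the label again (a trailing "-" deletes a label):
kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 \
  label service service-headless-toggled service.kubernetes.io/headless-
```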
STEP: verifying service is not up
Jun 17 23:19:54.913: INFO: Creating new host exec pod
Jun 17 23:19:54.925: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:56.929: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:58.929: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:00.929: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:02.928: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:04.929: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:06.929: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:08.930: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:10.929: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:12.928: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:14.930: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 23:20:14.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.28.211:80 && echo service-down-failed'
Jun 17 23:20:17.804: INFO: rc: 28
Jun 17 23:20:17.804: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.28.211:80 && echo service-down-failed" in pod services-3204/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.28.211:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.28.211:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3204
STEP: removing service.kubernetes.io/headless label
STEP: verifying service is up
Jun 17 23:20:17.818: INFO: Creating new host exec pod
Jun 17 23:20:17.832: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:19.835: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:21.836: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:23.836: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:25.835: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 17 23:20:25.835: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 17 23:20:29.854: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.211:80 2>&1 || true; echo; done" in pod services-3204/verify-service-up-host-exec-pod
Jun 17 23:20:29.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.211:80 2>&1 || true; echo; done'
Jun 17 23:20:30.366: INFO: stderr: "+ seq 1 150\n" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n" (identical trace lines truncated)
Jun 17 23:20:30.366: INFO: stdout: "service-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\n..." (150 responses in total, interleaved across the three backends; full list truncated)
Jun 17 23:20:30.368: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.211:80 2>&1 || true; echo; done" in pod services-3204/verify-service-up-exec-pod-k9ksh
Jun 17 23:20:30.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-up-exec-pod-k9ksh -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.211:80 2>&1 || true; echo; done'
Jun 17 23:20:30.736: INFO: stderr: "+ seq 1 150\n" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.28.211:80\n+ echo\n" (identical trace lines truncated)
Jun 17 23:20:30.736: INFO: stdout: "service-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-x9tgf\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-grrmp\nservice-headless-toggled-x9tgf\nservice-headless-toggled-x9tgf\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-headless-toggled-s6fjx\nservice-headless-toggled-grrmp\nservice-head
less-toggled-s6fjx\n[... ≈90 hostname responses elided: service-headless-toggled-s6fjx, -x9tgf, and -grrmp alternating, so the up-check reached all three endpoints ...]\n"
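The hostname dump above is the raw material for the up-check: each response is the hostname of the pod that served it, and the check passes only when every endpoint has answered at least once. A minimal sketch of that acceptance criterion (hostnames taken from this run; the tallying logic is an illustration, not the test's actual code):

```python
# Acceptance criterion behind the hostname dump: every endpoint pod must have
# answered at least once. The three hostnames are the endpoints in this run.
from collections import Counter

responses = [
    "service-headless-toggled-s6fjx",
    "service-headless-toggled-x9tgf",
    "service-headless-toggled-grrmp",
    "service-headless-toggled-x9tgf",
]
expected = {
    "service-headless-toggled-s6fjx",
    "service-headless-toggled-x9tgf",
    "service-headless-toggled-grrmp",
}

hits = Counter(responses)                       # how often each pod answered
all_endpoints_reached = expected.issubset(hits) # pass only if none are missing
print(all_endpoints_reached)  # True
```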
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3204
STEP: Deleting pod verify-service-up-exec-pod-k9ksh in namespace services-3204
STEP: verifying service-headless is still not up
Jun 17 23:20:30.749: INFO: Creating new host exec pod
Jun 17 23:20:30.761: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:32.764: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:34.768: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 23:20:34.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.5.221:80 && echo service-down-failed'
Jun 17 23:20:37.412: INFO: rc: 28
Jun 17 23:20:37.412: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.5.221:80 && echo service-down-failed" in pod services-3204/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3204 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.5.221:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.5.221:80
command terminated with exit code 28

error:
exit status 28
Output: 
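For context on the `rc: 28` above: curl's documented exit code 28 means the operation timed out (here, `--connect-timeout 2` expired), which is exactly what the down-check wants, so the `&& echo service-down-failed` marker is never printed. A tiny sketch of that decision:

```python
# curl exit code 28 is its documented "operation timed out" status; the
# down-check treats it as success because nothing answered on the service VIP.
CURL_CONNECT_TIMEOUT = 28

def service_is_down(curl_rc: int) -> bool:
    """The down-check passes when curl times out instead of connecting."""
    return curl_rc == CURL_CONNECT_TIMEOUT

print(service_is_down(28))  # True, matching the 'rc: 28' seen in the log
```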
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3204
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:37.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3204" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:94.408 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":1,"skipped":72,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:37.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Jun 17 23:20:37.818: INFO: (0) /api/v1/nodes/node2/proxy/logs/: 
anaconda/
audit/
boot.log
[... identical anaconda/ audit/ boot.log listing repeated for the remaining proxy attempts; rest of this spec's output lost to parallel-spec interleaving ...]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-1512
STEP: creating replication controller externalip-test in namespace services-1512
I0617 23:20:26.830672      28 runners.go:190] Created replication controller with name: externalip-test, namespace: services-1512, replica count: 2
I0617 23:20:29.883097      28 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:20:32.883902      28 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 17 23:20:32.883: INFO: Creating new exec pod
Jun 17 23:20:39.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1512 exec execpodt7kdj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Jun 17 23:20:40.243: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Jun 17 23:20:40.243: INFO: stdout: "externalip-test-5fwtv"
Jun 17 23:20:40.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1512 exec execpodt7kdj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.53.55 80'
Jun 17 23:20:40.746: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.53.55 80\nConnection to 10.233.53.55 80 port [tcp/http] succeeded!\n"
Jun 17 23:20:40.746: INFO: stdout: "externalip-test-h8hq5"
Jun 17 23:20:40.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1512 exec execpodt7kdj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jun 17 23:20:41.386: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Jun 17 23:20:41.386: INFO: stdout: "externalip-test-5fwtv"
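The three probes above hit the service by name, by ClusterIP (10.233.53.55), and by the unassigned ExternalIP (203.0.113.250), each time reading back the serving pod's hostname. A minimal connectivity-probe sketch in the same spirit (plain TCP connect only, not the test's actual `nc` invocation):

```python
# Plain TCP connect probe, analogous to the `nc -v -t -w 2` checks above.
import socket

def probe(host: str, port: int = 80, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In the test itself, `nc` additionally writes a line and the backend replies with its hostname; the sketch covers only the connect step.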
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:41.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1512" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:14.598 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":1,"skipped":587,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:41.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
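The paths probed above are the version-stable, well-known apiserver endpoints. A small sketch of assembling the request URLs (the base URL is a hypothetical example; a real client would also present credentials):

```python
# Version-stable apiserver paths exercised by the spec above.
STATIC_PATHS = ["/healthz", "/api", "/apis", "/metrics",
                "/openapi/v2", "/version", "/logs"]

def url_for(base: str, path: str) -> str:
    """Join an apiserver base URL with one of the static paths."""
    return base.rstrip("/") + path

print(url_for("https://10.233.0.1:443/", "/healthz"))
# https://10.233.0.1:443/healthz
```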
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:41.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3043" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":2,"skipped":706,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:42.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Jun 17 23:20:42.030: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:42.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-2241" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:13.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-2289
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:20:13.997: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:14.035: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:16.037: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:18.039: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:20.041: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:22.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:24.042: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:26.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:28.039: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:30.042: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:32.040: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:34.039: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:20:34.044: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:20:36.047: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:20:42.081: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:20:42.081: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:42.088: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:42.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2289" for this suite.


S [SKIPPING] [28.225 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:26.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Jun 17 23:19:26.142: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:28.146: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:30.148: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:32.147: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:34.145: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:36.147: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Jun 17 23:19:36.167: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:38.172: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:40.172: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:42.171: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:44.171: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Jun 17 23:20:44.494: INFO: boom-server pod logs: 2022/06/17 23:19:32 external ip: 10.244.3.104
2022/06/17 23:19:32 listen on 0.0.0.0:9000
2022/06/17 23:19:32 probing 10.244.3.104
2022/06/17 23:19:44 tcp packet: &{SrcPort:39714 DestPort:9000 Seq:1195181410 Ack:0 Flags:40962 WindowSize:29200 Checksum:43798 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:19:44 tcp packet: &{SrcPort:39714 DestPort:9000 Seq:1195181411 Ack:3259246137 Flags:32784 WindowSize:229 Checksum:28668 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:44 connection established
2022/06/17 23:19:44 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 155 34 194 66 159 153 71 61 5 99 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:19:44 checksumer: &{sum:427208 oddByte:33 length:39}
2022/06/17 23:19:44 ret:  427241
2022/06/17 23:19:44 ret:  34031
2022/06/17 23:19:44 ret:  34031
2022/06/17 23:19:44 boom packet injected
2022/06/17 23:19:44 tcp packet: &{SrcPort:39714 DestPort:9000 Seq:1195181411 Ack:3259246137 Flags:32785 WindowSize:229 Checksum:28667 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
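The `checksumer` lines in this boom-server log follow standard Internet-checksum arithmetic (RFC 1071): the running sum plus the trailing odd byte gives the first `ret`, and the repeated `ret` matches a 16-bit fold of it, where carries above bit 15 wrap back into the low word. A sketch reproducing the numbers from the 23:19:44 entry (the one's-complement step that yields the on-wire checksum is not shown):

```python
# RFC 1071-style fold: wrap carries above 16 bits back into the low word.
# Input values are taken verbatim from the 23:19:44 checksumer entry above.
def fold16(total: int) -> int:
    while total >> 16:
        total = (total >> 16) + (total & 0xFFFF)
    return total

total = 427208 + 33       # sum + trailing odd byte = 427241, the first 'ret'
print(fold16(total))      # 34031, the folded 'ret' printed twice in the log
```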
2022/06/17 23:19:46 tcp packet: &{SrcPort:45281 DestPort:9000 Seq:2397573587 Ack:0 Flags:40962 WindowSize:29200 Checksum:14699 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:19:46 tcp packet: &{SrcPort:45281 DestPort:9000 Seq:2397573588 Ack:2470485398 Flags:32784 WindowSize:229 Checksum:44583 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:46 connection established
2022/06/17 23:19:46 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 176 225 147 63 22 246 142 232 17 212 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:19:46 checksumer: &{sum:571768 oddByte:33 length:39}
2022/06/17 23:19:46 ret:  571801
2022/06/17 23:19:46 ret:  47521
2022/06/17 23:19:46 ret:  47521
2022/06/17 23:19:46 boom packet injected
2022/06/17 23:19:46 tcp packet: &{SrcPort:45281 DestPort:9000 Seq:2397573588 Ack:2470485398 Flags:32785 WindowSize:229 Checksum:44582 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:48 tcp packet: &{SrcPort:38740 DestPort:9000 Seq:4084423303 Ack:0 Flags:40962 WindowSize:29200 Checksum:40424 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:19:48 tcp packet: &{SrcPort:38740 DestPort:9000 Seq:4084423304 Ack:3464301183 Flags:32784 WindowSize:229 Checksum:25263 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:48 connection established
2022/06/17 23:19:48 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 151 84 206 123 131 223 243 115 90 136 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:19:48 checksumer: &{sum:496053 oddByte:33 length:39}
2022/06/17 23:19:48 ret:  496086
2022/06/17 23:19:48 ret:  37341
2022/06/17 23:19:48 ret:  37341
2022/06/17 23:19:48 boom packet injected
2022/06/17 23:19:48 tcp packet: &{SrcPort:38740 DestPort:9000 Seq:4084423304 Ack:3464301183 Flags:32785 WindowSize:229 Checksum:25262 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:50 tcp packet: &{SrcPort:35939 DestPort:9000 Seq:1252159103 Ack:0 Flags:40962 WindowSize:29200 Checksum:13794 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:19:50 tcp packet: &{SrcPort:35939 DestPort:9000 Seq:1252159104 Ack:103485860 Flags:32784 WindowSize:229 Checksum:46085 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:50 connection established
2022/06/17 23:19:50 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 140 99 6 41 139 4 74 162 110 128 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:19:50 checksumer: &{sum:432469 oddByte:33 length:39}
2022/06/17 23:19:50 ret:  432502
2022/06/17 23:19:50 ret:  39292
2022/06/17 23:19:50 ret:  39292
2022/06/17 23:19:50 boom packet injected
2022/06/17 23:19:50 tcp packet: &{SrcPort:35939 DestPort:9000 Seq:1252159104 Ack:103485860 Flags:32785 WindowSize:229 Checksum:46084 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:52 tcp packet: &{SrcPort:33153 DestPort:9000 Seq:1073378002 Ack:0 Flags:40962 WindowSize:29200 Checksum:16201 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:19:52 tcp packet: &{SrcPort:33153 DestPort:9000 Seq:1073378003 Ack:1011818376 Flags:32784 WindowSize:229 Checksum:29076 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:52 connection established
2022/06/17 23:19:52 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 129 129 60 77 152 232 63 250 114 211 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:19:52 checksumer: &{sum:551558 oddByte:33 length:39}
2022/06/17 23:19:52 ret:  551591
2022/06/17 23:19:52 ret:  27311
2022/06/17 23:19:52 ret:  27311
2022/06/17 23:19:52 boom packet injected
2022/06/17 23:19:52 tcp packet: &{SrcPort:33153 DestPort:9000 Seq:1073378003 Ack:1011818376 Flags:32785 WindowSize:229 Checksum:29075 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:54 tcp packet: &{SrcPort:39714 DestPort:9000 Seq:1195181412 Ack:3259246138 Flags:32784 WindowSize:229 Checksum:8665 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:54 tcp packet: &{SrcPort:37462 DestPort:9000 Seq:1002724371 Ack:0 Flags:40962 WindowSize:29200 Checksum:16792 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:19:54 tcp packet: &{SrcPort:37462 DestPort:9000 Seq:1002724372 Ack:1562594235 Flags:32784 WindowSize:229 Checksum:7947 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:54 connection established
2022/06/17 23:19:54 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 146 86 93 33 197 27 59 196 92 20 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:19:54 checksumer: &{sum:414155 oddByte:33 length:39}
2022/06/17 23:19:54 ret:  414188
2022/06/17 23:19:54 ret:  20978
2022/06/17 23:19:54 ret:  20978
2022/06/17 23:19:54 boom packet injected
2022/06/17 23:19:54 tcp packet: &{SrcPort:37462 DestPort:9000 Seq:1002724372 Ack:1562594235 Flags:32785 WindowSize:229 Checksum:7946 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:56 tcp packet: &{SrcPort:45281 DestPort:9000 Seq:2397573589 Ack:2470485399 Flags:32784 WindowSize:229 Checksum:24581 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:56 tcp packet: &{SrcPort:44669 DestPort:9000 Seq:291566063 Ack:0 Flags:40962 WindowSize:29200 Checksum:45607 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:19:56 tcp packet: &{SrcPort:44669 DestPort:9000 Seq:291566064 Ack:4207147154 Flags:32784 WindowSize:229 Checksum:15697 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:56 connection established
2022/06/17 23:19:56 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 174 125 250 194 113 242 17 96 241 240 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:19:56 checksumer: &{sum:551323 oddByte:33 length:39}
2022/06/17 23:19:56 ret:  551356
2022/06/17 23:19:56 ret:  27076
2022/06/17 23:19:56 ret:  27076
2022/06/17 23:19:56 boom packet injected
2022/06/17 23:19:56 tcp packet: &{SrcPort:44669 DestPort:9000 Seq:291566064 Ack:4207147154 Flags:32785 WindowSize:229 Checksum:15696 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:58 tcp packet: &{SrcPort:38740 DestPort:9000 Seq:4084423305 Ack:3464301184 Flags:32784 WindowSize:229 Checksum:5260 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:58 tcp packet: &{SrcPort:39677 DestPort:9000 Seq:3710897980 Ack:0 Flags:40962 WindowSize:29200 Checksum:5307 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:19:58 tcp packet: &{SrcPort:39677 DestPort:9000 Seq:3710897981 Ack:1590506063 Flags:32784 WindowSize:229 Checksum:64078 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:19:58 connection established
2022/06/17 23:19:58 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 154 253 94 203 171 175 221 47 207 61 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:19:58 checksumer: &{sum:510927 oddByte:33 length:39}
2022/06/17 23:19:58 ret:  510960
2022/06/17 23:19:58 ret:  52215
2022/06/17 23:19:58 ret:  52215
2022/06/17 23:19:58 boom packet injected
2022/06/17 23:19:58 tcp packet: &{SrcPort:39677 DestPort:9000 Seq:3710897981 Ack:1590506063 Flags:32785 WindowSize:229 Checksum:64077 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:00 tcp packet: &{SrcPort:35939 DestPort:9000 Seq:1252159105 Ack:103485861 Flags:32784 WindowSize:229 Checksum:26083 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:00 tcp packet: &{SrcPort:37156 DestPort:9000 Seq:2290453531 Ack:0 Flags:40962 WindowSize:29200 Checksum:44687 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:00 tcp packet: &{SrcPort:37156 DestPort:9000 Seq:2290453532 Ack:3345068767 Flags:32784 WindowSize:229 Checksum:41774 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:00 connection established
2022/06/17 23:20:00 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 145 36 199 96 44 63 136 133 140 28 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:00 checksumer: &{sum:412696 oddByte:33 length:39}
2022/06/17 23:20:00 ret:  412729
2022/06/17 23:20:00 ret:  19519
2022/06/17 23:20:00 ret:  19519
2022/06/17 23:20:00 boom packet injected
2022/06/17 23:20:00 tcp packet: &{SrcPort:37156 DestPort:9000 Seq:2290453532 Ack:3345068767 Flags:32785 WindowSize:229 Checksum:41773 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:02 tcp packet: &{SrcPort:33153 DestPort:9000 Seq:1073378004 Ack:1011818377 Flags:32784 WindowSize:229 Checksum:9074 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:02 tcp packet: &{SrcPort:36976 DestPort:9000 Seq:381435745 Ack:0 Flags:40962 WindowSize:29200 Checksum:26101 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:02 tcp packet: &{SrcPort:36976 DestPort:9000 Seq:381435746 Ack:4102091518 Flags:32784 WindowSize:229 Checksum:58756 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:02 connection established
2022/06/17 23:20:02 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 144 112 244 127 108 94 22 188 63 98 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:02 checksumer: &{sum:479941 oddByte:33 length:39}
2022/06/17 23:20:02 ret:  479974
2022/06/17 23:20:02 ret:  21229
2022/06/17 23:20:02 ret:  21229
2022/06/17 23:20:02 boom packet injected
2022/06/17 23:20:02 tcp packet: &{SrcPort:36976 DestPort:9000 Seq:381435746 Ack:4102091518 Flags:32785 WindowSize:229 Checksum:58755 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:04 tcp packet: &{SrcPort:37462 DestPort:9000 Seq:1002724373 Ack:1562594236 Flags:32784 WindowSize:229 Checksum:53480 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:04 tcp packet: &{SrcPort:37278 DestPort:9000 Seq:1482086363 Ack:0 Flags:40962 WindowSize:29200 Checksum:33506 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:04 tcp packet: &{SrcPort:37278 DestPort:9000 Seq:1482086364 Ack:646947763 Flags:32784 WindowSize:229 Checksum:6110 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:04 connection established
2022/06/17 23:20:04 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 145 158 38 142 29 19 88 86 215 220 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:04 checksumer: &{sum:481411 oddByte:33 length:39}
2022/06/17 23:20:04 ret:  481444
2022/06/17 23:20:04 ret:  22699
2022/06/17 23:20:04 ret:  22699
2022/06/17 23:20:04 boom packet injected
2022/06/17 23:20:04 tcp packet: &{SrcPort:37278 DestPort:9000 Seq:1482086364 Ack:646947763 Flags:32785 WindowSize:229 Checksum:6109 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:06 tcp packet: &{SrcPort:44669 DestPort:9000 Seq:291566065 Ack:4207147155 Flags:32784 WindowSize:229 Checksum:61230 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:06 tcp packet: &{SrcPort:45197 DestPort:9000 Seq:3303463399 Ack:0 Flags:40962 WindowSize:29200 Checksum:60806 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:06 tcp packet: &{SrcPort:45197 DestPort:9000 Seq:3303463400 Ack:818115073 Flags:32784 WindowSize:229 Checksum:41520 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:06 connection established
2022/06/17 23:20:06 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 176 141 48 193 235 97 196 230 217 232 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:06 checksumer: &{sum:550376 oddByte:33 length:39}
2022/06/17 23:20:06 ret:  550409
2022/06/17 23:20:06 ret:  26129
2022/06/17 23:20:06 ret:  26129
2022/06/17 23:20:06 boom packet injected
2022/06/17 23:20:06 tcp packet: &{SrcPort:45197 DestPort:9000 Seq:3303463400 Ack:818115073 Flags:32785 WindowSize:229 Checksum:41519 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:08 tcp packet: &{SrcPort:39677 DestPort:9000 Seq:3710897982 Ack:1590506064 Flags:32784 WindowSize:229 Checksum:44074 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:08 tcp packet: &{SrcPort:33686 DestPort:9000 Seq:2580727968 Ack:0 Flags:40962 WindowSize:29200 Checksum:20232 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:08 tcp packet: &{SrcPort:33686 DestPort:9000 Seq:2580727969 Ack:2287113040 Flags:32784 WindowSize:229 Checksum:33538 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:08 connection established
2022/06/17 23:20:08 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 131 150 136 81 12 176 153 210 200 161 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:08 checksumer: &{sum:520696 oddByte:33 length:39}
2022/06/17 23:20:08 ret:  520729
2022/06/17 23:20:08 ret:  61984
2022/06/17 23:20:08 ret:  61984
2022/06/17 23:20:08 boom packet injected
2022/06/17 23:20:08 tcp packet: &{SrcPort:33686 DestPort:9000 Seq:2580727969 Ack:2287113040 Flags:32785 WindowSize:229 Checksum:33537 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:10 tcp packet: &{SrcPort:37156 DestPort:9000 Seq:2290453533 Ack:3345068768 Flags:32784 WindowSize:229 Checksum:21771 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:10 tcp packet: &{SrcPort:45906 DestPort:9000 Seq:1647797364 Ack:0 Flags:40962 WindowSize:29200 Checksum:45890 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:10 tcp packet: &{SrcPort:45906 DestPort:9000 Seq:1647797365 Ack:577674164 Flags:32784 WindowSize:229 Checksum:15596 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:10 connection established
2022/06/17 23:20:10 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 179 82 34 109 21 20 98 55 100 117 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:10 checksumer: &{sum:419376 oddByte:33 length:39}
2022/06/17 23:20:10 ret:  419409
2022/06/17 23:20:10 ret:  26199
2022/06/17 23:20:10 ret:  26199
2022/06/17 23:20:10 boom packet injected
2022/06/17 23:20:10 tcp packet: &{SrcPort:45906 DestPort:9000 Seq:1647797365 Ack:577674164 Flags:32785 WindowSize:229 Checksum:15595 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:12 tcp packet: &{SrcPort:36976 DestPort:9000 Seq:381435747 Ack:4102091519 Flags:32784 WindowSize:229 Checksum:38753 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:12 tcp packet: &{SrcPort:41762 DestPort:9000 Seq:2501241588 Ack:0 Flags:40962 WindowSize:29200 Checksum:580 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:12 tcp packet: &{SrcPort:41762 DestPort:9000 Seq:2501241589 Ack:4003007392 Flags:32784 WindowSize:229 Checksum:18436 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:12 connection established
2022/06/17 23:20:12 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 163 34 238 151 133 0 149 21 234 245 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:12 checksumer: &{sum:437269 oddByte:33 length:39}
2022/06/17 23:20:12 ret:  437302
2022/06/17 23:20:12 ret:  44092
2022/06/17 23:20:12 ret:  44092
2022/06/17 23:20:12 boom packet injected
2022/06/17 23:20:12 tcp packet: &{SrcPort:41762 DestPort:9000 Seq:2501241589 Ack:4003007392 Flags:32785 WindowSize:229 Checksum:18435 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:14 tcp packet: &{SrcPort:37278 DestPort:9000 Seq:1482086365 Ack:646947764 Flags:32784 WindowSize:229 Checksum:51643 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:14 tcp packet: &{SrcPort:37898 DestPort:9000 Seq:2947148064 Ack:0 Flags:40962 WindowSize:29200 Checksum:61642 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:14 tcp packet: &{SrcPort:37898 DestPort:9000 Seq:2947148065 Ack:3585770012 Flags:32784 WindowSize:229 Checksum:53533 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:14 connection established
2022/06/17 23:20:14 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 148 10 213 184 251 124 175 169 233 33 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:14 checksumer: &{sum:455036 oddByte:33 length:39}
2022/06/17 23:20:14 ret:  455069
2022/06/17 23:20:14 ret:  61859
2022/06/17 23:20:14 ret:  61859
2022/06/17 23:20:14 boom packet injected
2022/06/17 23:20:14 tcp packet: &{SrcPort:37898 DestPort:9000 Seq:2947148065 Ack:3585770012 Flags:32785 WindowSize:229 Checksum:53532 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:16 tcp packet: &{SrcPort:45197 DestPort:9000 Seq:3303463401 Ack:818115074 Flags:32784 WindowSize:229 Checksum:21517 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:16 tcp packet: &{SrcPort:33815 DestPort:9000 Seq:2351447801 Ack:0 Flags:40962 WindowSize:29200 Checksum:50837 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:16 tcp packet: &{SrcPort:33815 DestPort:9000 Seq:2351447802 Ack:2602951163 Flags:32784 WindowSize:229 Checksum:31182 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:16 connection established
2022/06/17 23:20:16 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 132 23 155 36 91 91 140 40 62 250 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:16 checksumer: &{sum:434116 oddByte:33 length:39}
2022/06/17 23:20:16 ret:  434149
2022/06/17 23:20:16 ret:  40939
2022/06/17 23:20:16 ret:  40939
2022/06/17 23:20:16 boom packet injected
2022/06/17 23:20:16 tcp packet: &{SrcPort:33815 DestPort:9000 Seq:2351447802 Ack:2602951163 Flags:32785 WindowSize:229 Checksum:31181 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:18 tcp packet: &{SrcPort:33686 DestPort:9000 Seq:2580727970 Ack:2287113041 Flags:32784 WindowSize:229 Checksum:13536 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:18 tcp packet: &{SrcPort:40044 DestPort:9000 Seq:3054859773 Ack:0 Flags:40962 WindowSize:29200 Checksum:17790 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:18 tcp packet: &{SrcPort:40044 DestPort:9000 Seq:3054859774 Ack:2959918009 Flags:32784 WindowSize:229 Checksum:63968 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:18 connection established
2022/06/17 23:20:18 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 156 108 176 107 61 25 182 21 117 254 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:18 checksumer: &{sum:453428 oddByte:33 length:39}
2022/06/17 23:20:18 ret:  453461
2022/06/17 23:20:18 ret:  60251
2022/06/17 23:20:18 ret:  60251
2022/06/17 23:20:18 boom packet injected
2022/06/17 23:20:18 tcp packet: &{SrcPort:40044 DestPort:9000 Seq:3054859774 Ack:2959918009 Flags:32785 WindowSize:229 Checksum:63967 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:20 tcp packet: &{SrcPort:45906 DestPort:9000 Seq:1647797366 Ack:577674165 Flags:32784 WindowSize:229 Checksum:61129 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:20 tcp packet: &{SrcPort:37353 DestPort:9000 Seq:2362314032 Ack:0 Flags:40962 WindowSize:29200 Checksum:55877 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:20 tcp packet: &{SrcPort:37353 DestPort:9000 Seq:2362314033 Ack:583565672 Flags:32784 WindowSize:229 Checksum:22221 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:20 connection established
2022/06/17 23:20:20 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 145 233 34 198 250 200 140 206 13 49 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:20 checksumer: &{sum:548294 oddByte:33 length:39}
2022/06/17 23:20:20 ret:  548327
2022/06/17 23:20:20 ret:  24047
2022/06/17 23:20:20 ret:  24047
2022/06/17 23:20:20 boom packet injected
2022/06/17 23:20:20 tcp packet: &{SrcPort:37353 DestPort:9000 Seq:2362314033 Ack:583565672 Flags:32785 WindowSize:229 Checksum:22220 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:22 tcp packet: &{SrcPort:41762 DestPort:9000 Seq:2501241590 Ack:4003007393 Flags:32784 WindowSize:229 Checksum:63969 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:22 tcp packet: &{SrcPort:36261 DestPort:9000 Seq:45941478 Ack:0 Flags:40962 WindowSize:29200 Checksum:27412 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:22 tcp packet: &{SrcPort:36261 DestPort:9000 Seq:45941479 Ack:1247702167 Flags:32784 WindowSize:229 Checksum:52486 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:22 connection established
2022/06/17 23:20:22 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 141 165 74 92 229 247 2 189 2 231 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:22 checksumer: &{sum:557888 oddByte:33 length:39}
2022/06/17 23:20:22 ret:  557921
2022/06/17 23:20:22 ret:  33641
2022/06/17 23:20:22 ret:  33641
2022/06/17 23:20:22 boom packet injected
2022/06/17 23:20:22 tcp packet: &{SrcPort:36261 DestPort:9000 Seq:45941479 Ack:1247702167 Flags:32785 WindowSize:229 Checksum:52485 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:24 tcp packet: &{SrcPort:37898 DestPort:9000 Seq:2947148066 Ack:3585770013 Flags:32784 WindowSize:229 Checksum:33530 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:24 tcp packet: &{SrcPort:45519 DestPort:9000 Seq:2588433073 Ack:0 Flags:40962 WindowSize:29200 Checksum:20418 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:24 tcp packet: &{SrcPort:45519 DestPort:9000 Seq:2588433074 Ack:2289460752 Flags:32784 WindowSize:229 Checksum:29266 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:24 connection established
2022/06/17 23:20:24 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 177 207 136 116 223 112 154 72 90 178 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:24 checksumer: &{sum:497036 oddByte:33 length:39}
2022/06/17 23:20:24 ret:  497069
2022/06/17 23:20:24 ret:  38324
2022/06/17 23:20:24 ret:  38324
2022/06/17 23:20:24 boom packet injected
2022/06/17 23:20:24 tcp packet: &{SrcPort:45519 DestPort:9000 Seq:2588433074 Ack:2289460752 Flags:32785 WindowSize:229 Checksum:29265 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:26 tcp packet: &{SrcPort:33815 DestPort:9000 Seq:2351447803 Ack:2602951164 Flags:32784 WindowSize:229 Checksum:11178 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:26 tcp packet: &{SrcPort:38999 DestPort:9000 Seq:2210767696 Ack:0 Flags:40962 WindowSize:29200 Checksum:12109 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:26 tcp packet: &{SrcPort:38999 DestPort:9000 Seq:2210767697 Ack:2624639491 Flags:32784 WindowSize:229 Checksum:51742 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:26 connection established
2022/06/17 23:20:26 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 152 87 156 111 75 99 131 197 163 81 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:26 checksumer: &{sum:468773 oddByte:33 length:39}
2022/06/17 23:20:26 ret:  468806
2022/06/17 23:20:26 ret:  10061
2022/06/17 23:20:26 ret:  10061
2022/06/17 23:20:26 boom packet injected
2022/06/17 23:20:26 tcp packet: &{SrcPort:38999 DestPort:9000 Seq:2210767697 Ack:2624639491 Flags:32785 WindowSize:229 Checksum:51741 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:28 tcp packet: &{SrcPort:40044 DestPort:9000 Seq:3054859775 Ack:2959918010 Flags:32784 WindowSize:229 Checksum:43966 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:28 tcp packet: &{SrcPort:41946 DestPort:9000 Seq:4000155165 Ack:0 Flags:40962 WindowSize:29200 Checksum:52868 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:28 tcp packet: &{SrcPort:41946 DestPort:9000 Seq:4000155166 Ack:4150817787 Flags:32784 WindowSize:229 Checksum:25750 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:28 connection established
2022/06/17 23:20:28 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 163 218 247 102 237 91 238 109 134 30 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:28 checksumer: &{sum:462715 oddByte:33 length:39}
2022/06/17 23:20:28 ret:  462748
2022/06/17 23:20:28 ret:  4003
2022/06/17 23:20:28 ret:  4003
2022/06/17 23:20:28 boom packet injected
2022/06/17 23:20:28 tcp packet: &{SrcPort:41946 DestPort:9000 Seq:4000155166 Ack:4150817787 Flags:32785 WindowSize:229 Checksum:25749 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:30 tcp packet: &{SrcPort:37353 DestPort:9000 Seq:2362314034 Ack:583565673 Flags:32784 WindowSize:229 Checksum:2218 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:30 tcp packet: &{SrcPort:46636 DestPort:9000 Seq:881352768 Ack:0 Flags:40962 WindowSize:29200 Checksum:37924 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:30 tcp packet: &{SrcPort:46636 DestPort:9000 Seq:881352769 Ack:2349598941 Flags:32784 WindowSize:229 Checksum:63712 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:30 connection established
2022/06/17 23:20:30 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 182 44 140 10 130 61 52 136 96 65 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:30 checksumer: &{sum:402392 oddByte:33 length:39}
2022/06/17 23:20:30 ret:  402425
2022/06/17 23:20:30 ret:  9215
2022/06/17 23:20:30 ret:  9215
2022/06/17 23:20:30 boom packet injected
2022/06/17 23:20:30 tcp packet: &{SrcPort:46636 DestPort:9000 Seq:881352769 Ack:2349598941 Flags:32785 WindowSize:229 Checksum:63711 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:32 tcp packet: &{SrcPort:36261 DestPort:9000 Seq:45941480 Ack:1247702168 Flags:32784 WindowSize:229 Checksum:32482 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:32 tcp packet: &{SrcPort:42609 DestPort:9000 Seq:114102458 Ack:0 Flags:40962 WindowSize:29200 Checksum:6481 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:32 tcp packet: &{SrcPort:42609 DestPort:9000 Seq:114102459 Ack:3597972178 Flags:32784 WindowSize:229 Checksum:33246 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:32 connection established
2022/06/17 23:20:32 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 166 113 214 115 44 50 6 205 16 187 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:32 checksumer: &{sum:492862 oddByte:33 length:39}
2022/06/17 23:20:32 ret:  492895
2022/06/17 23:20:32 ret:  34150
2022/06/17 23:20:32 ret:  34150
2022/06/17 23:20:32 boom packet injected
2022/06/17 23:20:32 tcp packet: &{SrcPort:42609 DestPort:9000 Seq:114102459 Ack:3597972178 Flags:32785 WindowSize:229 Checksum:33245 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:34 tcp packet: &{SrcPort:45519 DestPort:9000 Seq:2588433075 Ack:2289460753 Flags:32784 WindowSize:229 Checksum:9263 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:34 tcp packet: &{SrcPort:42507 DestPort:9000 Seq:305293778 Ack:0 Flags:40962 WindowSize:29200 Checksum:44393 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:34 tcp packet: &{SrcPort:42507 DestPort:9000 Seq:305293779 Ack:2926614821 Flags:32784 WindowSize:229 Checksum:20439 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:34 connection established
2022/06/17 23:20:34 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 166 11 174 111 18 133 18 50 105 211 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:34 checksumer: &{sum:453473 oddByte:33 length:39}
2022/06/17 23:20:34 ret:  453506
2022/06/17 23:20:34 ret:  60296
2022/06/17 23:20:34 ret:  60296
2022/06/17 23:20:34 boom packet injected
2022/06/17 23:20:34 tcp packet: &{SrcPort:42507 DestPort:9000 Seq:305293779 Ack:2926614821 Flags:32785 WindowSize:229 Checksum:20438 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:36 tcp packet: &{SrcPort:38999 DestPort:9000 Seq:2210767698 Ack:2624639492 Flags:32784 WindowSize:229 Checksum:31739 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:36 tcp packet: &{SrcPort:42662 DestPort:9000 Seq:4152855149 Ack:0 Flags:40962 WindowSize:29200 Checksum:40716 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:36 tcp packet: &{SrcPort:42662 DestPort:9000 Seq:4152855150 Ack:3674781839 Flags:32784 WindowSize:229 Checksum:60839 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:36 connection established
2022/06/17 23:20:36 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 166 166 219 7 49 239 247 135 138 110 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:36 checksumer: &{sum:489907 oddByte:33 length:39}
2022/06/17 23:20:36 ret:  489940
2022/06/17 23:20:36 ret:  31195
2022/06/17 23:20:36 ret:  31195
2022/06/17 23:20:36 boom packet injected
2022/06/17 23:20:36 tcp packet: &{SrcPort:42662 DestPort:9000 Seq:4152855150 Ack:3674781839 Flags:32785 WindowSize:229 Checksum:60838 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:38 tcp packet: &{SrcPort:41946 DestPort:9000 Seq:4000155167 Ack:4150817788 Flags:32784 WindowSize:229 Checksum:5747 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:38 tcp packet: &{SrcPort:41064 DestPort:9000 Seq:206123202 Ack:0 Flags:40962 WindowSize:29200 Checksum:57954 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:38 tcp packet: &{SrcPort:41064 DestPort:9000 Seq:206123203 Ack:1084804861 Flags:32784 WindowSize:229 Checksum:44318 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:38 connection established
2022/06/17 23:20:38 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 160 104 64 167 72 93 12 73 48 195 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:38 checksumer: &{sum:483044 oddByte:33 length:39}
2022/06/17 23:20:38 ret:  483077
2022/06/17 23:20:38 ret:  24332
2022/06/17 23:20:38 ret:  24332
2022/06/17 23:20:38 boom packet injected
2022/06/17 23:20:38 tcp packet: &{SrcPort:41064 DestPort:9000 Seq:206123203 Ack:1084804861 Flags:32785 WindowSize:229 Checksum:44317 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:40 tcp packet: &{SrcPort:46636 DestPort:9000 Seq:881352770 Ack:2349598942 Flags:32784 WindowSize:229 Checksum:43709 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:40 tcp packet: &{SrcPort:40446 DestPort:9000 Seq:1475399821 Ack:0 Flags:40962 WindowSize:29200 Checksum:61833 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:40 tcp packet: &{SrcPort:40446 DestPort:9000 Seq:1475399822 Ack:976360167 Flags:32784 WindowSize:229 Checksum:30466 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:40 connection established
2022/06/17 23:20:40 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 157 254 58 48 140 71 87 240 208 142 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:40 checksumer: &{sum:514826 oddByte:33 length:39}
2022/06/17 23:20:40 ret:  514859
2022/06/17 23:20:40 ret:  56114
2022/06/17 23:20:40 ret:  56114
2022/06/17 23:20:40 boom packet injected
2022/06/17 23:20:40 tcp packet: &{SrcPort:40446 DestPort:9000 Seq:1475399822 Ack:976360167 Flags:32785 WindowSize:229 Checksum:30465 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:42 tcp packet: &{SrcPort:42609 DestPort:9000 Seq:114102460 Ack:3597972179 Flags:32784 WindowSize:229 Checksum:13243 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:42 tcp packet: &{SrcPort:44403 DestPort:9000 Seq:3867451218 Ack:0 Flags:40962 WindowSize:29200 Checksum:31977 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:42 tcp packet: &{SrcPort:44403 DestPort:9000 Seq:3867451219 Ack:2924627339 Flags:32784 WindowSize:229 Checksum:21451 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:42 connection established
2022/06/17 23:20:42 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 173 115 174 80 190 235 230 132 159 83 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:42 checksumer: &{sum:486942 oddByte:33 length:39}
2022/06/17 23:20:42 ret:  486975
2022/06/17 23:20:42 ret:  28230
2022/06/17 23:20:42 ret:  28230
2022/06/17 23:20:42 boom packet injected
2022/06/17 23:20:42 tcp packet: &{SrcPort:44403 DestPort:9000 Seq:3867451219 Ack:2924627339 Flags:32785 WindowSize:229 Checksum:21450 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:44 tcp packet: &{SrcPort:42507 DestPort:9000 Seq:305293780 Ack:2926614822 Flags:32784 WindowSize:229 Checksum:436 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:44 tcp packet: &{SrcPort:41402 DestPort:9000 Seq:3865860432 Ack:0 Flags:40962 WindowSize:29200 Checksum:50923 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.4.81
2022/06/17 23:20:44 tcp packet: &{SrcPort:41402 DestPort:9000 Seq:3865860433 Ack:2163889486 Flags:32784 WindowSize:229 Checksum:45970 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.4.81
2022/06/17 23:20:44 connection established
2022/06/17 23:20:44 calling checksumTCP: 10.244.3.104 10.244.4.81 [35 40 161 186 128 248 206 174 230 108 89 81 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/06/17 23:20:44 checksumer: &{sum:525742 oddByte:33 length:39}
2022/06/17 23:20:44 ret:  525775
2022/06/17 23:20:44 ret:  1495
2022/06/17 23:20:44 ret:  1495
2022/06/17 23:20:44 boom packet injected
2022/06/17 23:20:44 tcp packet: &{SrcPort:41402 DestPort:9000 Seq:3865860433 Ack:2163889486 Flags:32785 WindowSize:229 Checksum:45969 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.4.81

Jun 17 23:20:44.494: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:44.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-8541" for this suite.


• [SLOW TEST:78.402 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":2,"skipped":229,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:44.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Jun 17 23:20:44.608: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:44.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7255" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:16.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-9455
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:20:16.565: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:16.597: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:18.600: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:20.602: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:22.602: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:24.604: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:26.600: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:28.602: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:30.601: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:32.600: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:34.601: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:36.601: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:20:36.606: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:20:38.610: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:20:46.630: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:20:46.630: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:46.643: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:46.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9455" for this suite.


S [SKIPPING] [30.199 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:11.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-6724
STEP: creating a client pod for probing the service svc-udp
Jun 17 23:20:11.575: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:13.578: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:15.581: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:17.580: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:19.579: INFO: The status of Pod pod-client is Running (Ready = true)
Jun 17 23:20:19.596: INFO: Pod client logs: Fri Jun 17 23:20:14 UTC 2022
Fri Jun 17 23:20:14 UTC 2022 Try: 1

Fri Jun 17 23:20:14 UTC 2022 Try: 2

Fri Jun 17 23:20:14 UTC 2022 Try: 3

Fri Jun 17 23:20:14 UTC 2022 Try: 4

Fri Jun 17 23:20:14 UTC 2022 Try: 5

Fri Jun 17 23:20:14 UTC 2022 Try: 6

Fri Jun 17 23:20:14 UTC 2022 Try: 7

Fri Jun 17 23:20:19 UTC 2022 Try: 8

Fri Jun 17 23:20:19 UTC 2022 Try: 9

Fri Jun 17 23:20:19 UTC 2022 Try: 10

Fri Jun 17 23:20:19 UTC 2022 Try: 11

Fri Jun 17 23:20:19 UTC 2022 Try: 12

Fri Jun 17 23:20:19 UTC 2022 Try: 13

STEP: creating a backend pod pod-server-1 for the service svc-udp
Jun 17 23:20:19.612: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:21.616: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:23.617: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:25.616: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-6724 to expose endpoints map[pod-server-1:[80]]
Jun 17 23:20:25.627: INFO: successfully validated that service svc-udp in namespace conntrack-6724 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Jun 17 23:20:30.665: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:32.667: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:34.668: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:36.667: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Jun 17 23:20:36.669: INFO: Cleaning up pod-server-1 pod
Jun 17 23:20:36.676: INFO: Waiting for pod pod-server-1 to disappear
Jun 17 23:20:36.680: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-6724 to expose endpoints map[pod-server-2:[80]]
Jun 17 23:20:36.687: INFO: successfully validated that service svc-udp in namespace conntrack-6724 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:46.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-6724" for this suite.


• [SLOW TEST:35.185 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":1,"skipped":1063,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:02.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0617 23:19:02.943575      31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 23:19:02.943: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 23:19:02.945: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-729
STEP: creating service up-down-1 in namespace services-729
STEP: creating replication controller up-down-1 in namespace services-729
I0617 23:19:02.956871      31 runners.go:190] Created replication controller with name: up-down-1, namespace: services-729, replica count: 3
I0617 23:19:06.007396      31 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:09.008287      31 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:12.011356      31 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:15.014085      31 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:18.014694      31 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:21.015486      31 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-729
STEP: creating service up-down-2 in namespace services-729
STEP: creating replication controller up-down-2 in namespace services-729
I0617 23:19:21.026799      31 runners.go:190] Created replication controller with name: up-down-2, namespace: services-729, replica count: 3
I0617 23:19:24.078113      31 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:27.079006      31 runners.go:190] up-down-2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:19:30.079822      31 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
Jun 17 23:19:30.082: INFO: Creating new host exec pod
Jun 17 23:19:30.096: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:32.099: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:34.100: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:36.099: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 17 23:19:36.099: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 17 23:19:44.122: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.19.48:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-host-exec-pod
Jun 17 23:19:44.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.19.48:80 2>&1 || true; echo; done'
Jun 17 23:19:44.486: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.19.48:80\n+ echo\n [identical '+ wget -q -T 1 -O - http://10.233.19.48:80\n+ echo' pair repeated 150 times in total; truncated]"
Jun 17 23:19:44.487: INFO: stdout: "up-down-1-g4kzn\nup-down-1-fkrgp\nup-down-1-fkrgp\nup-down-1-8qfg5\n [150 responses in total, all served by the three backends up-down-1-g4kzn, up-down-1-fkrgp and up-down-1-8qfg5; truncated]"
Jun 17 23:19:44.487: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.19.48:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-exec-pod-pjmr5
Jun 17 23:19:44.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-exec-pod-pjmr5 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.19.48:80 2>&1 || true; echo; done'
Jun 17 23:19:45.712: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.19.48:80\n+ echo\n [identical '+ wget -q -T 1 -O - http://10.233.19.48:80\n+ echo' pair repeated 150 times in total; truncated]"
Jun 17 23:19:45.713: INFO: stdout: "up-down-1-fkrgp\nup-down-1-8qfg5\nup-down-1-g4kzn\nup-down-1-fkrgp\n [150 responses in total, all served by the three backends up-down-1-g4kzn, up-down-1-fkrgp and up-down-1-8qfg5; truncated]"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-729
STEP: Deleting pod verify-service-up-exec-pod-pjmr5 in namespace services-729
STEP: verifying service up-down-2 is up
Jun 17 23:19:45.728: INFO: Creating new host exec pod
Jun 17 23:19:45.740: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:19:47.744: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 17 23:19:47.744: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 17 23:19:51.761: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-host-exec-pod
Jun 17 23:19:51.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done'
Jun 17 23:19:52.286: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n [identical '+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo' pair repeated 150 times in total; truncated]"
Jun 17 23:19:52.287: INFO: stdout: "up-down-2-d6d2w\nup-down-2-d6d2w\nup-down-2-r6tc7\n [... 150 responses in total, all from the three up-down-2 backends (-d6d2w, -r6tc7, -htckl); full list truncated ...] \nup-down-2-htckl\n"
Jun 17 23:19:52.287: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-exec-pod-82n4g
Jun 17 23:19:52.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-exec-pod-82n4g -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done'
Jun 17 23:19:52.661: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n [... identical wget/echo trace repeated for all 150 iterations; truncated ...] \n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n"
Jun 17 23:19:52.662: INFO: stdout: "up-down-2-d6d2w\nup-down-2-htckl\nup-down-2-d6d2w\n [... 150 responses in total, all from the three up-down-2 backends (-d6d2w, -r6tc7, -htckl); full list truncated ...] \nup-down-2-d6d2w\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-729
STEP: Deleting pod verify-service-up-exec-pod-82n4g in namespace services-729
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-729, will wait for the garbage collector to delete the pods
Jun 17 23:19:52.734: INFO: Deleting ReplicationController up-down-1 took: 4.737855ms
Jun 17 23:19:52.834: INFO: Terminating ReplicationController up-down-1 pods took: 100.216349ms
STEP: verifying service up-down-1 is not up
Jun 17 23:20:00.745: INFO: Creating new host exec pod
Jun 17 23:20:00.799: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:02.802: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:04.803: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:06.804: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 23:20:06.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.19.48:80 && echo service-down-failed'
Jun 17 23:20:09.049: INFO: rc: 28
Jun 17 23:20:09.049: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.19.48:80 && echo service-down-failed" in pod services-729/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.19.48:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.19.48:80
command terminated with exit code 28

error:
exit status 28
Output: 
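[Editor's note: rc 28 above is curl's operation-timeout exit code, which is the expected outcome for a stopped service — the `&& echo service-down-failed` marker only fires when curl succeeds. A minimal sketch of that pass/fail logic, with a hypothetical helper name (not the e2e framework's actual code):]

```python
def service_is_down(curl_rc: int, stdout: str) -> bool:
    # `curl ... && echo service-down-failed` short-circuits: the marker is
    # printed only when curl exits 0, i.e. the service still answered.
    # rc 28 (curl operation timeout) means no backend responded in time.
    return curl_rc != 0 and "service-down-failed" not in stdout

assert service_is_down(28, "")                        # timeout: service is down, as expected
assert not service_is_down(0, "service-down-failed")  # reachable: the down-check fails
```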
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-729
STEP: verifying service up-down-2 is still up
Jun 17 23:20:09.055: INFO: Creating new host exec pod
Jun 17 23:20:09.068: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:11.072: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:13.072: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 17 23:20:13.072: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 17 23:20:19.088: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-host-exec-pod
Jun 17 23:20:19.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done'
Jun 17 23:20:19.482: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n [... identical wget/echo trace repeated for all 150 iterations; truncated ...] \n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n"
Jun 17 23:20:19.483: INFO: stdout: "up-down-2-htckl\nup-down-2-r6tc7\nup-down-2-d6d2w\n [... 150 responses in total, all from the three up-down-2 backends (-d6d2w, -r6tc7, -htckl); full list truncated ...] \nup-down-2-r6tc7\n"
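[Editor's note: the "verifying service has 3 reachable backends" step passes when the 150 wget responses cover every expected endpoint pod. A minimal sketch of that tally, with hypothetical names (not the e2e framework's actual code):]

```python
def backends_reached(stdout: str, expected: set[str]) -> bool:
    # Each successful wget prints the serving pod's hostname on its own line;
    # a timed-out request leaves an empty line. The check passes only if
    # every expected backend pod answered at least once across the loop.
    seen = {line for line in stdout.splitlines() if line}
    return expected <= seen

out = "up-down-2-htckl\nup-down-2-r6tc7\n\nup-down-2-d6d2w\n"
assert backends_reached(out, {"up-down-2-htckl", "up-down-2-r6tc7", "up-down-2-d6d2w"})
```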
Jun 17 23:20:19.483: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-exec-pod-7dw26
Jun 17 23:20:19.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-exec-pod-7dw26 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done'
Jun 17 23:20:19.849: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n [... identical wget/echo trace repeated; truncated ...] \n+ wget -q -T 1 -O -
http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n"
Jun 17 23:20:19.849: INFO: stdout: 150 responses, all from the three up-down-2 backends: up-down-2-d6d2w, up-down-2-r6tc7, up-down-2-htckl (full per-request list elided)
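The "verifying service has 3 reachable backends" steps in this log all follow the same pattern: run the wget loop shown above inside an exec pod, collect the pod names the backends echo back, and check that every expected backend appears. Below is a minimal, self-contained sketch of that check; the service IP and the 150-iteration count come from the log, while the simulated `responses` variable and the distinct-name counting step are assumptions standing in for the framework's actual verification code.

```shell
# Sketch of the reachability check. In the real test the probe loop runs
# inside an exec pod via kubectl, roughly:
#   for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 || true; echo; done
# Each backend pod responds with its own name, so counting distinct names
# in the collected stdout gives the number of endpoints that answered.

# Simulated probe output (pod names taken from the log; in a real run this
# would be the stdout of the wget loop above).
responses='up-down-2-d6d2w
up-down-2-r6tc7
up-down-2-htckl
up-down-2-d6d2w'

# Distinct non-empty lines = number of reachable backends.
distinct=$(printf '%s\n' "$responses" | sort -u | grep -c .)
echo "reachable backends: $distinct"   # prints "reachable backends: 3"
```

The `-T 1` timeout and `|| true` keep a single unreachable endpoint from aborting the loop, so one failed probe shows up as a missing name rather than a failed command.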
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-729
STEP: Deleting pod verify-service-up-exec-pod-7dw26 in namespace services-729
STEP: creating service up-down-3 in namespace services-729
STEP: creating replication controller up-down-3 in namespace services-729
I0617 23:20:19.874380      31 runners.go:190] Created replication controller with name: up-down-3, namespace: services-729, replica count: 3
I0617 23:20:22.925800      31 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:20:25.926638      31 runners.go:190] up-down-3 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:20:28.929663      31 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-2 is still up
Jun 17 23:20:28.932: INFO: Creating new host exec pod
Jun 17 23:20:28.945: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:30.948: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 17 23:20:30.948: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 17 23:20:34.965: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-host-exec-pod
Jun 17 23:20:34.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done'
Jun 17 23:20:35.529: INFO: stderr: "+ seq 1 150\n" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n" (repeated trace lines elided)
Jun 17 23:20:35.529: INFO: stdout: 150 responses, all from the three up-down-2 backends: up-down-2-d6d2w, up-down-2-r6tc7, up-down-2-htckl (full per-request list elided)
Jun 17 23:20:35.529: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-exec-pod-nq2xg
Jun 17 23:20:35.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-exec-pod-nq2xg -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.44.156:80 2>&1 || true; echo; done'
Jun 17 23:20:36.035: INFO: stderr: "+ seq 1 150\n" followed by 150 repetitions of "+ wget -q -T 1 -O - http://10.233.44.156:80\n+ echo\n" (repeated trace lines elided)
Jun 17 23:20:36.036: INFO: stdout: 150 responses, all from the three up-down-2 backends: up-down-2-d6d2w, up-down-2-r6tc7, up-down-2-htckl (full per-request list elided)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-729
STEP: Deleting pod verify-service-up-exec-pod-nq2xg in namespace services-729
STEP: verifying service up-down-3 is up
Jun 17 23:20:36.053: INFO: Creating new host exec pod
Jun 17 23:20:36.068: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:38.071: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:40.075: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 17 23:20:40.075: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 17 23:20:46.101: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.191:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-host-exec-pod
Jun 17 23:20:46.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.191:80 2>&1 || true; echo; done'
Jun 17 23:20:46.783: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.33.191:80\n+ echo\n(… the '+ wget'/'+ echo' trace pair repeats for all 150 iterations …)\n"
Jun 17 23:20:46.783: INFO: stdout: "up-down-3-t4l2k\nup-down-3-l2f57\nup-down-3-t4l2k\n(… 150 responses total, all from the backends up-down-3-t4l2k, up-down-3-l2f57, and up-down-3-krqb8 …)\n"
Jun 17 23:20:46.784: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.191:80 2>&1 || true; echo; done" in pod services-729/verify-service-up-exec-pod-wxpj5
Jun 17 23:20:46.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-729 exec verify-service-up-exec-pod-wxpj5 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.191:80 2>&1 || true; echo; done'
Jun 17 23:20:47.515: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.33.191:80\n+ echo\n(… the '+ wget'/'+ echo' trace pair repeats for all 150 iterations …)\n"
Jun 17 23:20:47.515: INFO: stdout: "up-down-3-t4l2k\nup-down-3-krqb8\nup-down-3-l2f57\n(… 150 responses total, all from the backends up-down-3-t4l2k, up-down-3-l2f57, and up-down-3-krqb8 …)\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-729
STEP: Deleting pod verify-service-up-exec-pod-wxpj5 in namespace services-729
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:20:47.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-729" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:104.616 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":32,"failed":0}
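The passing spec above verifies the service by exec'ing a 150-iteration wget loop in a probe pod and reading the backend pod name from each response; the service is "up" when every expected endpoint shows up in the captured stdout. A minimal offline sketch of that coverage check, using the backend names from the log (the variable names are illustrative, not the framework's own):

```shell
# Expected backend pod names behind the service (taken from the log above).
expected="up-down-3-t4l2k
up-down-3-l2f57
up-down-3-krqb8"

# Truncated sample of the probe pod's stdout: one backend name per response.
stdout="up-down-3-t4l2k
up-down-3-l2f57
up-down-3-krqb8
up-down-3-t4l2k"

# Collect any expected backend that never answered.
missing=""
for name in $expected; do
  echo "$stdout" | grep -qx "$name" || missing="$missing $name"
done

if [ -z "$missing" ]; then
  echo "all endpoints reached"
else
  echo "missing endpoints:$missing"
fi
```

The real test additionally repeats the probe from both a host-network pod and a regular pod, as the two `verify-service-up-*` exec runs in the log show.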

SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:27.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-5041
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:20:27.455: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:27.491: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:29.495: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:31.495: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:33.496: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:35.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:37.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:39.496: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:41.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:43.496: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:45.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:47.494: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:49.495: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:20:49.500: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:20:51.503: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:21:05.545: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:21:05.545: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:21:05.552: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:21:05.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5041" for this suite.


S [SKIPPING] [38.245 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
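The spec above (and the similar ones that follow) is skipped by a setup gate: multi-node networking tests bail out when the suite cannot see at least two usable nodes, and the "(not -1)" suggests the suite's configured node count was simply never populated. A rough sketch of an equivalent pre-flight gate, assuming `kubectl get nodes --no-headers` table output (the sample node list below is stubbed for illustration):

```shell
# Stubbed 'kubectl get nodes --no-headers' output; in a real gate this
# would come from the live cluster instead of a hardcoded string.
nodes="node1   Ready    <none>   10d   v1.23.0
node2   Ready    <none>   10d   v1.23.0"

# Count nodes whose STATUS column reads Ready.
ready=$(echo "$nodes" | awk '$2 == "Ready"' | wc -l | tr -d ' ')

if [ "$ready" -lt 2 ]; then
  echo "SKIP: requires at least 2 nodes (have $ready)"
else
  echo "proceeding with $ready nodes"
fi
```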
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jun 17 23:21:05.705: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:37.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-448
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:20:38.079: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:38.110: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:40.113: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:42.112: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:44.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:46.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:48.114: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:50.116: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:52.236: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:54.114: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:56.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:58.113: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:21:00.114: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:21:00.119: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:21:02.124: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:21:10.162: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:21:10.162: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:21:10.170: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:21:10.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-448" for this suite.


S [SKIPPING] [32.235 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138

  Requires at least 2 nodes (not -1)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Jun 17 23:21:10.183: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:35.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-9410
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:20:35.325: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:35.366: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:37.369: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:39.371: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:41.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:43.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:45.369: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:47.371: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:49.371: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:51.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:53.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:55.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:57.370: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:20:57.375: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:20:59.381: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:21:11.403: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:21:11.403: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:21:11.409: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:21:11.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9410" for this suite.


S [SKIPPING] [36.226 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Jun 17 23:21:11.421: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:47.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Jun 17 23:21:07.091: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Jun 17 23:21:07.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-test2lwd9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Jun 17 23:21:07.343: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Jun 17 23:21:07.343: INFO: stdout: "10.244.1.9:47582"
STEP: Verifying the preserved source ip
Jun 17 23:21:07.343: INFO: Waiting up to 2m0s to get response from 10.244.0.5:8080
Jun 17 23:21:07.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-test2lwd9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.5:8080/clientip'
Jun 17 23:21:07.573: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.5:8080/clientip\n"
Jun 17 23:21:07.573: INFO: stdout: "10.244.1.9:50718"
STEP: Verifying the preserved source ip
Jun 17 23:21:07.573: INFO: Waiting up to 2m0s to get response from 10.244.4.106:8080
Jun 17 23:21:07.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-test2lwd9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.106:8080/clientip'
Jun 17 23:21:07.835: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.106:8080/clientip\n"
Jun 17 23:21:07.835: INFO: stdout: "10.244.1.9:36388"
STEP: Verifying the preserved source ip
Jun 17 23:21:07.835: INFO: Waiting up to 2m0s to get response from 10.244.3.140:8080
Jun 17 23:21:07.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-test2lwd9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.140:8080/clientip'
Jun 17 23:21:08.100: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.140:8080/clientip\n"
Jun 17 23:21:08.100: INFO: stdout: "10.244.1.9:41894"
STEP: Verifying the preserved source ip
Jun 17 23:21:08.100: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Jun 17 23:21:08.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-test647gz -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Jun 17 23:21:08.366: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Jun 17 23:21:08.366: INFO: stdout: "10.244.2.5:54238"
STEP: Verifying the preserved source ip
Jun 17 23:21:08.366: INFO: Waiting up to 2m0s to get response from 10.244.0.5:8080
Jun 17 23:21:08.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-test647gz -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.5:8080/clientip'
Jun 17 23:21:08.608: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.5:8080/clientip\n"
Jun 17 23:21:08.608: INFO: stdout: "10.244.2.5:49422"
STEP: Verifying the preserved source ip
Jun 17 23:21:08.608: INFO: Waiting up to 2m0s to get response from 10.244.4.106:8080
Jun 17 23:21:08.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-test647gz -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.106:8080/clientip'
Jun 17 23:21:08.861: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.106:8080/clientip\n"
Jun 17 23:21:08.861: INFO: stdout: "10.244.2.5:36582"
STEP: Verifying the preserved source ip
Jun 17 23:21:08.861: INFO: Waiting up to 2m0s to get response from 10.244.3.140:8080
Jun 17 23:21:08.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-test647gz -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.140:8080/clientip'
Jun 17 23:21:09.091: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.140:8080/clientip\n"
Jun 17 23:21:09.091: INFO: stdout: "10.244.2.5:40434"
STEP: Verifying the preserved source ip
Jun 17 23:21:09.091: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Jun 17 23:21:09.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testf6wn6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Jun 17 23:21:09.356: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Jun 17 23:21:09.356: INFO: stdout: "10.244.0.5:49846"
STEP: Verifying the preserved source ip
Jun 17 23:21:09.356: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Jun 17 23:21:09.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testf6wn6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Jun 17 23:21:09.590: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Jun 17 23:21:09.591: INFO: stdout: "10.244.0.5:47318"
STEP: Verifying the preserved source ip
Jun 17 23:21:09.591: INFO: Waiting up to 2m0s to get response from 10.244.4.106:8080
Jun 17 23:21:09.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testf6wn6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.106:8080/clientip'
Jun 17 23:21:09.835: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.106:8080/clientip\n"
Jun 17 23:21:09.835: INFO: stdout: "10.244.0.5:45156"
STEP: Verifying the preserved source ip
Jun 17 23:21:09.835: INFO: Waiting up to 2m0s to get response from 10.244.3.140:8080
Jun 17 23:21:09.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testf6wn6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.140:8080/clientip'
Jun 17 23:21:10.097: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.140:8080/clientip\n"
Jun 17 23:21:10.097: INFO: stdout: "10.244.0.5:46582"
STEP: Verifying the preserved source ip
Jun 17 23:21:10.097: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Jun 17 23:21:10.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testhrdz4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Jun 17 23:21:10.370: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Jun 17 23:21:10.370: INFO: stdout: "10.244.4.106:36736"
STEP: Verifying the preserved source ip
Jun 17 23:21:10.370: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Jun 17 23:21:10.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testhrdz4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Jun 17 23:21:10.630: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Jun 17 23:21:10.630: INFO: stdout: "10.244.4.106:41638"
STEP: Verifying the preserved source ip
Jun 17 23:21:10.630: INFO: Waiting up to 2m0s to get response from 10.244.0.5:8080
Jun 17 23:21:10.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testhrdz4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.5:8080/clientip'
Jun 17 23:21:11.079: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.5:8080/clientip\n"
Jun 17 23:21:11.079: INFO: stdout: "10.244.4.106:59082"
STEP: Verifying the preserved source ip
Jun 17 23:21:11.079: INFO: Waiting up to 2m0s to get response from 10.244.3.140:8080
Jun 17 23:21:11.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testhrdz4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.140:8080/clientip'
Jun 17 23:21:11.576: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.140:8080/clientip\n"
Jun 17 23:21:11.576: INFO: stdout: "10.244.4.106:42460"
STEP: Verifying the preserved source ip
Jun 17 23:21:11.576: INFO: Waiting up to 2m0s to get response from 10.244.1.9:8080
Jun 17 23:21:11.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testj7v4b -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip'
Jun 17 23:21:11.819: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.9:8080/clientip\n"
Jun 17 23:21:11.819: INFO: stdout: "10.244.3.140:57706"
STEP: Verifying the preserved source ip
Jun 17 23:21:11.819: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Jun 17 23:21:11.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testj7v4b -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Jun 17 23:21:12.146: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Jun 17 23:21:12.146: INFO: stdout: "10.244.3.140:41026"
STEP: Verifying the preserved source ip
Jun 17 23:21:12.146: INFO: Waiting up to 2m0s to get response from 10.244.0.5:8080
Jun 17 23:21:12.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testj7v4b -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.5:8080/clientip'
Jun 17 23:21:12.519: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.5:8080/clientip\n"
Jun 17 23:21:12.520: INFO: stdout: "10.244.3.140:35404"
STEP: Verifying the preserved source ip
Jun 17 23:21:12.520: INFO: Waiting up to 2m0s to get response from 10.244.4.106:8080
Jun 17 23:21:12.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-181 exec no-snat-testj7v4b -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.106:8080/clientip'
Jun 17 23:21:13.000: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.106:8080/clientip\n"
Jun 17 23:21:13.000: INFO: stdout: "10.244.3.140:47778"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:21:13.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-181" for this suite.


• [SLOW TEST:25.999 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":2,"skipped":1217,"failed":0}
Jun 17 23:21:13.010: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:46.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-5092
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:20:46.925: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:46.962: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:48.965: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:50.965: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:52.966: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:54.967: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:56.966: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:58.966: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:21:00.965: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:21:02.966: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:21:04.972: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:21:06.965: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:21:06.970: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:21:08.974: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:21:10.973: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:21:17.016: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:21:17.016: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:21:17.023: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:21:17.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5092" for this suite.


S [SKIPPING] [30.219 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Jun 17 23:21:17.037: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:47.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-b9ed0b9a-87f5-415d-a2a9-60475e3baa82]
STEP: Verifying pods for RC slow-terminating-unready-pod
Jun 17 23:20:47.629: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Jun 17 23:20:55.647: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-8rnlg]: "NOW: 2022-06-17 23:20:55.648205199 +0000 UTC m=+3.922844970", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-6824.svc.cluster.local
Jun 17 23:20:55.647: INFO: Creating new exec pod
Jun 17 23:21:09.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6824 exec execpod-m9ftg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6824.svc.cluster.local:80/'
Jun 17 23:21:09.920: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6824.svc.cluster.local:80/\n"
Jun 17 23:21:09.920: INFO: stdout: "NOW: 2022-06-17 23:21:09.912093588 +0000 UTC m=+18.186733359"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-6824 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Jun 17 23:21:14.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6824 exec execpod-m9ftg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6824.svc.cluster.local:80/; test "$?" -ne "0"'
Jun 17 23:21:16.849: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6824.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Jun 17 23:21:16.849: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Jun 17 23:21:16.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6824 exec execpod-m9ftg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6824.svc.cluster.local:80/'
Jun 17 23:21:18.251: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6824.svc.cluster.local:80/\n"
Jun 17 23:21:18.251: INFO: stdout: "NOW: 2022-06-17 23:21:18.243354682 +0000 UTC m=+26.517994453"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-6824
STEP: deleting service tolerate-unready in namespace services-6824
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:21:18.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6824" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:30.703 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":2,"skipped":56,"failed":0}
Jun 17 23:21:18.300: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:42.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
Jun 17 23:20:42.319: INFO: deploying iperf2 server
Jun 17 23:20:42.323: INFO: Waiting for deployment "iperf2-server-deployment" to complete
Jun 17 23:20:42.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Jun 17 23:20:44.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 23:20:46.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 23:20:48.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 23:20:50.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 23:20:52.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 23:20:54.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63791104842, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun 17 23:20:56.341: INFO: waiting for iperf2 server endpoints
Jun 17 23:20:58.346: INFO: found iperf2 server endpoints
Jun 17 23:20:58.346: INFO: waiting for client pods to be running
Jun 17 23:21:10.351: INFO: all client pods are ready: 2 pods
Jun 17 23:21:10.354: INFO: server pod phase Running
Jun 17 23:21:10.354: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 23:20:42 +0000 UTC Reason: Message:}
Jun 17 23:21:10.354: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 23:20:52 +0000 UTC Reason: Message:}
Jun 17 23:21:10.354: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 23:20:52 +0000 UTC Reason: Message:}
Jun 17 23:21:10.354: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-06-17 23:20:42 +0000 UTC Reason: Message:}
Jun 17 23:21:10.354: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2022-06-17 23:20:50 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://a2c563d4071f30c24b3a5e2a9e0ca89707aa34959c7811570e28c20ec1af9684 Started:0xc0024e76fc}
Jun 17 23:21:10.354: INFO: found 2 matching client pods
Jun 17 23:21:10.357: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-5866 PodName:iperf2-clients-8b9dl ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:10.357: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:10.439: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Jun 17 23:21:10.439: INFO: iperf version: 
Jun 17 23:21:10.439: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-8b9dl (node node1)
Jun 17 23:21:10.443: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-5866 PodName:iperf2-clients-8b9dl ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:10.443: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:25.607: INFO: Exec stderr: ""
Jun 17 23:21:25.607: INFO: output from exec on client pod iperf2-clients-8b9dl (node node1): 
20220617232111.569,10.244.4.111,36322,10.233.9.153,6789,3,0.0-1.0,119537664,956301312
20220617232112.576,10.244.4.111,36322,10.233.9.153,6789,3,1.0-2.0,119013376,952107008
20220617232113.563,10.244.4.111,36322,10.233.9.153,6789,3,2.0-3.0,118358016,946864128
20220617232114.573,10.244.4.111,36322,10.233.9.153,6789,3,3.0-4.0,117571584,940572672
20220617232115.562,10.244.4.111,36322,10.233.9.153,6789,3,4.0-5.0,117571584,940572672
20220617232116.569,10.244.4.111,36322,10.233.9.153,6789,3,5.0-6.0,117571584,940572672
20220617232117.575,10.244.4.111,36322,10.233.9.153,6789,3,6.0-7.0,116523008,932184064
20220617232118.562,10.244.4.111,36322,10.233.9.153,6789,3,7.0-8.0,118095872,944766976
20220617232119.568,10.244.4.111,36322,10.233.9.153,6789,3,8.0-9.0,116785152,934281216
20220617232120.574,10.244.4.111,36322,10.233.9.153,6789,3,9.0-10.0,117964800,943718400
20220617232120.574,10.244.4.111,36322,10.233.9.153,6789,3,0.0-10.0,1178992640,942560993

Jun 17 23:21:25.611: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-5866 PodName:iperf2-clients-c8f2p ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:25.611: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:25.695: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Jun 17 23:21:25.695: INFO: iperf version: 
Jun 17 23:21:25.695: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-c8f2p (node node2)
Jun 17 23:21:25.699: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-5866 PodName:iperf2-clients-c8f2p ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:25.699: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:40.826: INFO: Exec stderr: ""
Jun 17 23:21:40.826: INFO: output from exec on client pod iperf2-clients-c8f2p (node node2): 
20220617232126.800,10.244.3.143,54620,10.233.9.153,6789,3,0.0-1.0,3480748032,27845984256
20220617232127.790,10.244.3.143,54620,10.233.9.153,6789,3,1.0-2.0,3481403392,27851227136
20220617232128.801,10.244.3.143,54620,10.233.9.153,6789,3,2.0-3.0,3448373248,27586985984
20220617232129.788,10.244.3.143,54620,10.233.9.153,6789,3,3.0-4.0,3384672256,27077378048
20220617232130.795,10.244.3.143,54620,10.233.9.153,6789,3,4.0-5.0,3346661376,26773291008
20220617232131.782,10.244.3.143,54620,10.233.9.153,6789,3,5.0-6.0,3426484224,27411873792
20220617232132.790,10.244.3.143,54620,10.233.9.153,6789,3,6.0-7.0,3462266880,27698135040
20220617232133.797,10.244.3.143,54620,10.233.9.153,6789,3,7.0-8.0,3433431040,27467448320
20220617232134.784,10.244.3.143,54620,10.233.9.153,6789,3,8.0-9.0,3392929792,27143438336
20220617232135.791,10.244.3.143,54620,10.233.9.153,6789,3,9.0-10.0,3371171840,26969374720
20220617232135.791,10.244.3.143,54620,10.233.9.153,6789,3,0.0-10.0,34228142080,27382431516

Jun 17 23:21:40.826: INFO:                                From                                 To    Bandwidth (MB/s)
Jun 17 23:21:40.826: INFO:                               node1                              node2                 112
Jun 17 23:21:40.826: INFO:                               node2                              node2                3264
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:21:40.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-5866" for this suite.


• [SLOW TEST:58.545 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":3,"skipped":883,"failed":0}
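Each finished spec emits one JSON record like the line above. A small sketch of consuming such a record; the field meanings are inferred from context ("completed"/"skipped"/"failed" read as running counters for this parallel node, and "total":-1 as not-reported):

```python
import json

# One per-spec result record, copied from the log line above.
rec = json.loads(
    '{"msg":"PASSED [sig-network] Networking IPerf2 '
    '[Feature:Networking-Performance] should run iperf2",'
    '"total":-1,"completed":3,"skipped":883,"failed":0}')

# A pass is a record whose msg starts with PASSED and whose failed
# counter has not advanced.
assert rec["msg"].startswith("PASSED") and rec["failed"] == 0
```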
Jun 17 23:21:40.839: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:42.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-237
STEP: creating service service-proxy-disabled in namespace services-237
STEP: creating replication controller service-proxy-disabled in namespace services-237
I0617 23:20:42.171205      35 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-237, replica count: 3
I0617 23:20:45.222848      35 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:20:48.223107      35 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:20:51.224169      35 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:20:54.226140      35 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-237
STEP: creating service service-proxy-toggled in namespace services-237
STEP: creating replication controller service-proxy-toggled in namespace services-237
I0617 23:20:54.238886      35 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-237, replica count: 3
I0617 23:20:57.291406      35 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:21:00.291985      35 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:21:03.292251      35 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:21:06.292670      35 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:21:09.292941      35 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Jun 17 23:21:09.295: INFO: Creating new host exec pod
Jun 17 23:21:09.311: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:21:11.315: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:21:13.315: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 17 23:21:13.315: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 17 23:21:19.345: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.5.189:80 2>&1 || true; echo; done" in pod services-237/verify-service-up-host-exec-pod
Jun 17 23:21:19.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.5.189:80 2>&1 || true; echo; done'
Jun 17 23:21:20.503: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n..." (the same wget/echo trace pair repeats for all 150 loop iterations; elided)
Jun 17 23:21:20.503: INFO: stdout: "service-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\n..." (150 responses in total, all from these three backend pods; elided)
Jun 17 23:21:20.504: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.5.189:80 2>&1 || true; echo; done" in pod services-237/verify-service-up-exec-pod-wvk8c
Jun 17 23:21:20.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-up-exec-pod-wvk8c -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.5.189:80 2>&1 || true; echo; done'
Jun 17 23:21:20.889: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n..." (the same wget/echo trace pair repeats for all 150 loop iterations; elided)
Jun 17 23:21:20.890: INFO: stdout: "service-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\n..." (150 responses in total, all from these three backend pods; elided)
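The "3 reachable backends" check works because each backend pod serves its own name: the test fires 150 wget requests at the service VIP (the `seq 1 150` loop above) and the service counts as up when every expected pod name appears in the combined output. A minimal sketch, with pod names taken from this log:

```python
# Collect the distinct backend pod names seen in the wget loop's stdout.
def reached_backends(stdout):
    return {line for line in stdout.splitlines() if line}

sample = ("service-proxy-toggled-gfwzm\n"
          "service-proxy-toggled-8mb9z\n"
          "service-proxy-toggled-t7v4w\n"
          "service-proxy-toggled-8mb9z\n")

# All three replicas answered, so the service has 3 reachable backends.
assert len(reached_backends(sample)) == 3
```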
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-237
STEP: Deleting pod verify-service-up-exec-pod-wvk8c in namespace services-237
STEP: verifying service-disabled is not up
Jun 17 23:21:20.903: INFO: Creating new host exec pod
Jun 17 23:21:20.915: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:21:22.918: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 23:21:22.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.41.65:80 && echo service-down-failed'
Jun 17 23:21:25.166: INFO: rc: 28
Jun 17 23:21:25.166: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.41.65:80 && echo service-down-failed" in pod services-237/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.41.65:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.41.65:80
command terminated with exit code 28

error:
exit status 28
Output: 
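The `rc: 28` above is not a failure of the test: exit code 28 is curl's documented CURLE_OPERATION_TIMEDOUT, meaning nothing answered within `--connect-timeout 2`, which is exactly the expected outcome when verifying that a service is NOT reachable (a successful fetch would print `service-down-failed` instead). A small lookup for the curl exit codes relevant here:

```python
# curl exit codes as documented in the curl man page; only the ones this
# test can plausibly see are listed.
CURL_EXIT = {
    0: "success (service answered; the down-check would then fail)",
    7: "failed to connect to host (e.g. connection refused)",
    28: "operation timed out (--connect-timeout exceeded)",
}

assert "timed out" in CURL_EXIT[28]
```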
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-237
STEP: adding service-proxy-name label
STEP: verifying service is not up
Jun 17 23:21:25.179: INFO: Creating new host exec pod
Jun 17 23:21:25.192: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:21:27.196: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:21:29.195: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 23:21:29.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.5.189:80 && echo service-down-failed'
Jun 17 23:21:31.525: INFO: rc: 28
Jun 17 23:21:31.525: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.5.189:80 && echo service-down-failed" in pod services-237/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.5.189:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.5.189:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-237
STEP: removing service-proxy-name label
STEP: verifying service is up
Jun 17 23:21:31.545: INFO: Creating new host exec pod
Jun 17 23:21:31.558: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:21:33.562: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:21:35.563: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jun 17 23:21:35.563: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Jun 17 23:21:37.582: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.5.189:80 2>&1 || true; echo; done" in pod services-237/verify-service-up-host-exec-pod
Jun 17 23:21:37.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.5.189:80 2>&1 || true; echo; done'
Jun 17 23:21:38.009: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n"
Jun 17 23:21:38.009: INFO: stdout: "service-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled
-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggle
d-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\n"
Jun 17 23:21:38.010: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.5.189:80 2>&1 || true; echo; done" in pod services-237/verify-service-up-exec-pod-6wj6z
Jun 17 23:21:38.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-up-exec-pod-6wj6z -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.5.189:80 2>&1 || true; echo; done'
Jun 17 23:21:38.424: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.5.189:80\n+ echo\n"
Jun 17 23:21:38.424: INFO: stdout: "service-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled
-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\nservice-proxy-toggle
d-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-gfwzm\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-8mb9z\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-t7v4w\nservice-proxy-toggled-gfwzm\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-237
STEP: Deleting pod verify-service-up-exec-pod-6wj6z in namespace services-237
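The "verifying service has 3 reachable backends" step above fires 150 wget requests and expects responses from three distinct pod names. The distinct-name counting can be sketched as (helper name assumed, not the framework's):

```shell
# Reduce the wget output above (one backend pod name per response,
# plus blank lines from the interleaved 'echo') to a count of
# distinct backends reached.
count_backends() {
  grep -v '^$' | sort -u | wc -l
}
```

Feeding it the three pod names seen in this log's output would yield 3, matching the expectation.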
STEP: verifying service-disabled is still not up
Jun 17 23:21:38.439: INFO: Creating new host exec pod
Jun 17 23:21:38.452: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:21:40.457: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:21:42.457: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jun 17 23:21:42.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.41.65:80 && echo service-down-failed'
Jun 17 23:21:44.706: INFO: rc: 28
Jun 17 23:21:44.706: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.41.65:80 && echo service-down-failed" in pod services-237/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-237 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.41.65:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.41.65:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-237
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 23:21:44.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-237" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:62.581 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":3,"skipped":895,"failed":0}
Jun 17 23:21:44.727: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:19:56.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-3125
Jun 17 23:19:56.248: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-3125
I0617 23:19:56.263485      26 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-3125, replica count: 2
I0617 23:19:59.315420      26 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:20:02.317631      26 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:20:05.320608      26 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0617 23:20:08.322367      26 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jun 17 23:20:08.322: INFO: Creating new exec pod
Jun 17 23:20:19.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Jun 17 23:20:19.603: INFO: stderr: "+ nc -v -t -w 2 nodeport-update-service 80\n+ echo hostName\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Jun 17 23:20:19.603: INFO: stdout: "nodeport-update-service-9nsxw"
Jun 17 23:20:19.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.10.38 80'
Jun 17 23:20:19.824: INFO: stderr: "+ nc -v -t -w 2 10.233.10.38 80\n+ echo hostName\nConnection to 10.233.10.38 80 port [tcp/http] succeeded!\n"
Jun 17 23:20:19.824: INFO: stdout: "nodeport-update-service-9nsxw"
Jun 17 23:20:19.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:20:20.104: INFO: rc: 1
Jun 17 23:20:20.104: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:20:21.104 - 23:21:11.568: INFO: [repeated log entries condensed] The same probe ('/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458') was retried roughly once per second, ~48 further attempts in all, each ending with rc: 1 and stderr "nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused", followed by "Retrying...".
Jun 17 23:21:12.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:12.480: INFO: rc: 1
Jun 17 23:21:12.480: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:13.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:13.390: INFO: rc: 1
Jun 17 23:21:13.390: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:14.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:15.566: INFO: rc: 1
Jun 17 23:21:15.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:16.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:16.543: INFO: rc: 1
Jun 17 23:21:16.543: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:17.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:17.409: INFO: rc: 1
Jun 17 23:21:17.409: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:18.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:18.461: INFO: rc: 1
Jun 17 23:21:18.462: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:19.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:19.404: INFO: rc: 1
Jun 17 23:21:19.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:20.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:20.380: INFO: rc: 1
Jun 17 23:21:20.380: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:21.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:21.357: INFO: rc: 1
Jun 17 23:21:21.357: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:22.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:22.437: INFO: rc: 1
Jun 17 23:21:22.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:23.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:23.375: INFO: rc: 1
Jun 17 23:21:23.375: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:24.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:24.360: INFO: rc: 1
Jun 17 23:21:24.360: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:25.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:25.387: INFO: rc: 1
Jun 17 23:21:25.387: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:26.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:26.367: INFO: rc: 1
Jun 17 23:21:26.367: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:27.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:27.363: INFO: rc: 1
Jun 17 23:21:27.363: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:28.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:28.374: INFO: rc: 1
Jun 17 23:21:28.374: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:29.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:29.460: INFO: rc: 1
Jun 17 23:21:29.460: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:30.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:30.349: INFO: rc: 1
Jun 17 23:21:30.349: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:31.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:31.416: INFO: rc: 1
Jun 17 23:21:31.416: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:32.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:32.384: INFO: rc: 1
Jun 17 23:21:32.384: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:33.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:35.321: INFO: rc: 1
Jun 17 23:21:35.321: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:36.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:36.681: INFO: rc: 1
Jun 17 23:21:36.681: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:37.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:37.380: INFO: rc: 1
Jun 17 23:21:37.380: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:38.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:38.381: INFO: rc: 1
Jun 17 23:21:38.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:39.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:39.409: INFO: rc: 1
Jun 17 23:21:39.409: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:40.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:40.352: INFO: rc: 1
Jun 17 23:21:40.352: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:41.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:41.360: INFO: rc: 1
Jun 17 23:21:41.360: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:42.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:42.361: INFO: rc: 1
Jun 17 23:21:42.361: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:43.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:43.363: INFO: rc: 1
Jun 17 23:21:43.363: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:44.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:45.511: INFO: rc: 1
Jun 17 23:21:45.512: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:46.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:46.503: INFO: rc: 1
Jun 17 23:21:46.503: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:47.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:47.373: INFO: rc: 1
Jun 17 23:21:47.373: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:48.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:48.347: INFO: rc: 1
Jun 17 23:21:48.347: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:49.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:49.379: INFO: rc: 1
Jun 17 23:21:49.379: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:50.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:50.367: INFO: rc: 1
Jun 17 23:21:50.367: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:51.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:51.379: INFO: rc: 1
Jun 17 23:21:51.379: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:52.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:52.419: INFO: rc: 1
Jun 17 23:21:52.419: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:53.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:53.368: INFO: rc: 1
Jun 17 23:21:53.368: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:54.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:54.348: INFO: rc: 1
Jun 17 23:21:54.348: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:55.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:55.371: INFO: rc: 1
Jun 17 23:21:55.371: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:21:56.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:21:56.364: INFO: rc: 1
Jun 17 23:21:56.364: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
[... identical retry attempts, one per second from 23:21:57 through 23:22:20, omitted; every attempt logged rc: 1 with stderr "nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused" ...]
Jun 17 23:22:20.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458'
Jun 17 23:22:20.605: INFO: rc: 1
Jun 17 23:22:20.606: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3125 exec execpod7j5pc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30458:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30458
nc: connect to 10.10.190.207 port 30458 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 17 23:22:20.606: FAIL: Unexpected error:
    <*errors.errorString | 0xc003d74c80>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30458 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30458 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001934f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001934f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001934f00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
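The stack trace points at test/e2e/network/service.go:1245, where the framework retries the NodePort probe (the `kubectl exec ... nc` invocation logged above) roughly once per second until a 2m0s timeout. A minimal self-contained sketch of that retry pattern; the function name, signature, and injectable `probe` hook are illustrative, not the framework's actual API:

```python
import socket
import time

def reach_service(host, port, timeout_s=120.0, interval_s=1.0, probe=None):
    """Retry a TCP probe roughly once per interval until timeout_s elapses.

    probe(host, port) -> bool defaults to a real TCP connect with a 2s
    per-attempt limit (mirroring `nc -w 2`); it is injectable so the loop
    can be exercised without a live endpoint. Returns True on the first
    successful probe, False once the overall deadline passes.
    """
    if probe is None:
        def probe(h, p):
            try:
                with socket.create_connection((h, p), timeout=2):
                    return True
            except OSError:  # connection refused, timeout, unreachable, ...
                return False

    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe(host, port):
            return True
        time.sleep(interval_s)
    return False
```

In the failure above every attempt was refused, so a loop of this shape exhausts its 2m0s budget and the test reports "service is not reachable within 2m0s timeout".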
Jun 17 23:22:20.607: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-3125".
STEP: Found 17 events.
Jun 17 23:22:20.625: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod7j5pc: { } Scheduled: Successfully assigned services-3125/execpod7j5pc to node1
Jun 17 23:22:20.625: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-9nsxw: { } Scheduled: Successfully assigned services-3125/nodeport-update-service-9nsxw to node2
Jun 17 23:22:20.625: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-k4mst: { } Scheduled: Successfully assigned services-3125/nodeport-update-service-k4mst to node1
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:19:56 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-9nsxw
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:19:56 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-k4mst
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:19:58 +0000 UTC - event for nodeport-update-service-9nsxw: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:19:58 +0000 UTC - event for nodeport-update-service-9nsxw: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 287.65546ms
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:19:59 +0000 UTC - event for nodeport-update-service-9nsxw: {kubelet node2} Created: Created container nodeport-update-service
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:19:59 +0000 UTC - event for nodeport-update-service-k4mst: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:20:00 +0000 UTC - event for nodeport-update-service-9nsxw: {kubelet node2} Started: Started container nodeport-update-service
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:20:00 +0000 UTC - event for nodeport-update-service-k4mst: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 293.954281ms
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:20:00 +0000 UTC - event for nodeport-update-service-k4mst: {kubelet node1} Created: Created container nodeport-update-service
Jun 17 23:22:20.625: INFO: At 2022-06-17 23:20:01 +0000 UTC - event for nodeport-update-service-k4mst: {kubelet node1} Started: Started container nodeport-update-service
Jun 17 23:22:20.626: INFO: At 2022-06-17 23:20:12 +0000 UTC - event for execpod7j5pc: {kubelet node1} Created: Created container agnhost-container
Jun 17 23:22:20.626: INFO: At 2022-06-17 23:20:12 +0000 UTC - event for execpod7j5pc: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 23:22:20.626: INFO: At 2022-06-17 23:20:12 +0000 UTC - event for execpod7j5pc: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 383.909108ms
Jun 17 23:22:20.626: INFO: At 2022-06-17 23:20:13 +0000 UTC - event for execpod7j5pc: {kubelet node1} Started: Started container agnhost-container
Jun 17 23:22:20.628: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Jun 17 23:22:20.628: INFO: execpod7j5pc                   node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:08 +0000 UTC  }]
Jun 17 23:22:20.628: INFO: nodeport-update-service-9nsxw  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:56 +0000 UTC  }]
Jun 17 23:22:20.628: INFO: nodeport-update-service-k4mst  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:19:56 +0000 UTC  }]
Jun 17 23:22:20.628: INFO: 
Jun 17 23:22:20.632: INFO: 
Logging node info for node master1
Jun 17 23:22:20.635: INFO: Node Info: &Node{ObjectMeta:{master1    47691bb2-4ee9-4386-8bec-0f9db1917afd 75968 0 2022-06-17 19:59:00 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:14 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:14 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:14 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:14 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:20.636: INFO: 
Logging kubelet events for node master1
Jun 17 23:22:20.638: INFO: 
Logging pods the kubelet thinks are on node master1
Jun 17 23:22:20.665: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.665: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 17 23:22:20.665: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:20.665: INFO: 	Container docker-registry ready: true, restart count 0
Jun 17 23:22:20.665: INFO: 	Container nginx ready: true, restart count 0
Jun 17 23:22:20.665: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:20.665: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:20.665: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:20.665: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.665: INFO: 	Container kube-scheduler ready: true, restart count 0
Jun 17 23:22:20.665: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.665: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 17 23:22:20.665: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:20.665: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:22:20.665: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:20.665: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.665: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:20.665: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.665: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 17 23:22:20.754: INFO: 
Latency metrics for node master1
Jun 17 23:22:20.754: INFO: 
Logging node info for node master2
Jun 17 23:22:20.757: INFO: Node Info: &Node{ObjectMeta:{master2    71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 75961 0 2022-06-17 19:59:29 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:11 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:11 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:11 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:11 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:20.757: INFO: 
Logging kubelet events for node master2
Jun 17 23:22:20.759: INFO: 
Logging pods the kubelet thinks are on node master2
Jun 17 23:22:20.773: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Container nfd-controller ready: true, restart count 0
Jun 17 23:22:20.773: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:20.773: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:20.773: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 17 23:22:20.773: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Container kube-scheduler ready: true, restart count 2
Jun 17 23:22:20.773: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:22:20.773: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:20.773: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Container coredns ready: true, restart count 1
Jun 17 23:22:20.773: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Container autoscaler ready: true, restart count 1
Jun 17 23:22:20.773: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 17 23:22:20.773: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 17 23:22:20.773: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.773: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:20.862: INFO: 
Latency metrics for node master2
Jun 17 23:22:20.862: INFO: 
Logging node info for node master3
Jun 17 23:22:20.866: INFO: Node Info: &Node{ObjectMeta:{master3    4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 75985 0 2022-06-17 19:59:37 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:20 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:20 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:20 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:20 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:20.866: INFO: 
Logging kubelet events for node master3
Jun 17 23:22:20.869: INFO: 
Logging pods the kubelet thinks are on node master3
Jun 17 23:22:20.887: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.887: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 17 23:22:20.887: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.887: INFO: 	Container coredns ready: true, restart count 1
Jun 17 23:22:20.887: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:20.887: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:20.887: INFO: 	Container prometheus-operator ready: true, restart count 0
Jun 17 23:22:20.887: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:20.887: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:20.887: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:20.887: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.887: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 17 23:22:20.887: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.887: INFO: 	Container kube-scheduler ready: true, restart count 2
Jun 17 23:22:20.887: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.887: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 17 23:22:20.887: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:20.887: INFO: 	Init container install-cni ready: true, restart count 0
Jun 17 23:22:20.887: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:20.887: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.887: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:20.970: INFO: 
Latency metrics for node master3
Jun 17 23:22:20.970: INFO: 
Logging node info for node node1
Jun 17 23:22:20.974: INFO: Node Info: &Node{ObjectMeta:{node1    2db3a28c-448f-4511-9db8-4ef739b681b1 75973 0 2022-06-17 20:00:39 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 
20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 22:24:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:17 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:17 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:17 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:17 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 
k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:20.975: INFO: 
Logging kubelet events for node node1
Jun 17 23:22:20.977: INFO: 
Logging pods the kubelet thinks are on node node1
Jun 17 23:22:20.997: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
Jun 17 23:22:20.997: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container tas-extender ready: true, restart count 0
Jun 17 23:22:20.997: INFO: nodeport-update-service-k4mst started at 2022-06-17 23:19:56 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun 17 23:22:20.997: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 17 23:22:20.997: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container config-reloader ready: true, restart count 0
Jun 17 23:22:20.997: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Jun 17 23:22:20.997: INFO: 	Container grafana ready: true, restart count 0
Jun 17 23:22:20.997: INFO: 	Container prometheus ready: true, restart count 1
Jun 17 23:22:20.997: INFO: execpod7j5pc started at 2022-06-17 23:20:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container agnhost-container ready: true, restart count 0
Jun 17 23:22:20.997: INFO: netserver-0 started at 2022-06-17 23:20:45 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:22:20.997: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container collectd ready: true, restart count 0
Jun 17 23:22:20.997: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 17 23:22:20.997: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 17 23:22:20.997: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:22:20.997: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:20.997: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 17 23:22:20.997: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container discover ready: false, restart count 0
Jun 17 23:22:20.997: INFO: 	Container init ready: false, restart count 0
Jun 17 23:22:20.997: INFO: 	Container install ready: false, restart count 0
Jun 17 23:22:20.997: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:20.997: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:20.997: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.997: INFO: 	Container cmk-webhook ready: true, restart count 0
Jun 17 23:22:20.997: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.998: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 17 23:22:20.998: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:20.998: INFO: 	Container nodereport ready: true, restart count 0
Jun 17 23:22:20.998: INFO: 	Container reconcile ready: true, restart count 0
Jun 17 23:22:20.998: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.998: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 17 23:22:20.998: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:20.998: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:21.190: INFO: 
Latency metrics for node node1
Jun 17 23:22:21.190: INFO: 
Logging node info for node node2
Jun 17 23:22:21.193: INFO: Node Info: &Node{ObjectMeta:{node2    467d2582-10f7-475b-9f20-5b7c2e46267a 75966 0 2022-06-17 20:00:37 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 
20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 22:24:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-17 23:05:09 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:13 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:13 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:13 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:13 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:21.194: INFO: 
Logging kubelet events for node node2
Jun 17 23:22:21.196: INFO: 
Logging pods the kubelet thinks are on node node2
Jun 17 23:22:21.214: INFO: test-container-pod started at 2022-06-17 23:21:10 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:22:21.214: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 17 23:22:21.214: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 17 23:22:21.214: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:21.214: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:21.214: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:22:21.214: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:21.214: INFO: netserver-1 started at 2022-06-17 23:20:45 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:22:21.214: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 17 23:22:21.214: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container discover ready: false, restart count 0
Jun 17 23:22:21.214: INFO: 	Container init ready: false, restart count 0
Jun 17 23:22:21.214: INFO: 	Container install ready: false, restart count 0
Jun 17 23:22:21.214: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container nodereport ready: true, restart count 0
Jun 17 23:22:21.214: INFO: 	Container reconcile ready: true, restart count 0
Jun 17 23:22:21.214: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container collectd ready: true, restart count 0
Jun 17 23:22:21.214: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 17 23:22:21.214: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 17 23:22:21.214: INFO: nodeport-update-service-9nsxw started at 2022-06-17 23:19:56 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container nodeport-update-service ready: true, restart count 0
Jun 17 23:22:21.214: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 17 23:22:21.214: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 17 23:22:21.214: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:21.214: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:21.356: INFO: 
Latency metrics for node node2
Jun 17 23:22:21.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3125" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [145.148 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Jun 17 23:22:20.606: Unexpected error:
      <*errors.errorString | 0xc003d74c80>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30458 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30458 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":2,"skipped":316,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Jun 17 23:22:21.373: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 23:20:44.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-3241
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 17 23:20:44.887: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jun 17 23:20:44.920: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:46.925: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:48.924: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jun 17 23:20:50.924: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:52.924: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:54.927: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:56.926: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:20:58.925: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:21:00.924: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:21:02.925: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:21:04.927: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jun 17 23:21:06.925: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jun 17 23:21:06.930: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:21:08.935: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jun 17 23:21:10.934: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jun 17 23:21:16.959: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Jun 17 23:21:16.959: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Jun 17 23:21:16.980: INFO: Service node-port-service in namespace nettest-3241 found.
Jun 17 23:21:16.995: INFO: Service session-affinity-service in namespace nettest-3241 found.
STEP: Waiting for NodePort service to expose endpoint
Jun 17 23:21:17.999: INFO: Waiting for number of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Jun 17 23:21:19.001: INFO: Waiting for number of service:session-affinity-service endpoints to be 2
STEP: creating a second service with same selector
Jun 17 23:21:19.015: INFO: Service second-node-port-service in namespace nettest-3241 found.
Jun 17 23:21:20.018: INFO: Waiting for number of service:second-node-port-service endpoints to be 2
STEP: dialing(http) netserver-0 (endpoint) --> 10.233.35.160:80 (config.clusterIP)
Jun 17 23:21:20.023: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.233.35.160&port=80&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:20.023: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:20.124: INFO: Waiting for responses: map[netserver-0:{}]
Jun 17 23:21:22.128: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.233.35.160&port=80&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:22.128: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:22.217: INFO: Waiting for responses: map[]
Jun 17 23:21:22.217: INFO: reached 10.233.35.160 after 1/34 tries
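Each dial sequence above retries the `/dial` request and aggregates the hostnames that answer, stopping once every expected backend has been heard from (here netserver-0 answered within 1 of 34 tries; the nodeIP dial that follows never drains its map and eventually fails). A minimal Python sketch of that wait-for-all-endpoints loop — the names `wait_for_endpoints` and `probe` are hypothetical, not the e2e framework's:

```python
def wait_for_endpoints(expected, probe, max_tries):
    """Retry probe() until every hostname in `expected` has answered.

    expected:  iterable of backend hostnames we must hear from
    probe:     callable returning the set of hostnames that answered one dial
    max_tries: give up after this many attempts
    Returns the 1-based attempt on which the last endpoint answered.
    """
    remaining = set(expected)
    for attempt in range(1, max_tries + 1):
        remaining -= probe()          # drop every backend that responded
        if not remaining:
            return attempt
    raise TimeoutError(f"no response from: {sorted(remaining)}")

# Simulated probe: netserver-1 only answers on the second dial.
answers = iter([{"netserver-0"}, {"netserver-1"}])
tries_used = wait_for_endpoints({"netserver-0", "netserver-1"},
                                lambda: next(answers), max_tries=34)
# tries_used == 2
```

The shrinking `map[netserver-0:{} netserver-1:{}]` in the log corresponds to `remaining` here: a successful run drains it to `map[]`.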
STEP: dialing(http) netserver-0 (endpoint) --> 10.10.190.207:30440 (nodeIP)
Jun 17 23:21:22.220: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:22.220: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:22.347: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:24.352: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:24.352: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:24.450: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:26.454: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:26.454: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:26.882: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:28.889: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:28.889: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:29.150: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:31.154: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:31.154: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:31.270: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:33.273: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:33.273: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:33.611: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:35.620: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:35.620: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:35.845: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:37.849: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:37.849: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:37.953: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:39.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:39.960: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:40.041: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:42.046: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:42.046: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:42.131: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:44.136: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:44.136: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:44.218: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:46.223: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:46.223: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:46.479: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:48.486: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:48.486: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:48.575: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:50.578: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:50.578: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:50.659: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:52.663: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:52.663: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:52.891: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:54.896: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:54.896: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:54.975: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:56.978: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:56.978: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:57.057: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:21:59.061: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:21:59.061: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:21:59.139: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:01.145: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:01.145: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:01.233: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:03.236: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:03.236: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:03.314: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:05.319: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:05.319: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:05.405: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:07.410: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:07.410: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:07.496: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:09.504: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:09.504: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:09.602: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:11.608: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:11.608: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:11.694: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:13.700: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:13.700: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:13.782: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:15.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:15.787: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:15.894: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:17.898: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:17.898: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:17.984: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:19.993: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:19.993: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:20.089: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:22.094: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:22.094: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:22.181: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:24.188: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:24.188: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:24.291: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:26.296: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:26.296: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:26.380: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:28.386: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:28.386: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:28.473: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:30.479: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:30.479: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:30.557: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:32.562: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'] Namespace:nettest-3241 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jun 17 23:22:32.562: INFO: >>> kubeConfig: /root/.kube/config
Jun 17 23:22:32.642: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Jun 17 23:22:34.646: INFO: 
Output of kubectl describe pod nettest-3241/netserver-0:

Jun 17 23:22:34.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-3241 describe pod netserver-0 --namespace=nettest-3241'
Jun 17 23:22:34.853: INFO: stderr: ""
Jun 17 23:22:34.854: INFO: Name:         netserver-0
Namespace:    nettest-3241
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Fri, 17 Jun 2022 23:20:45 +0000
Labels:       selector-d42884b5-0345-44ef-ad9d-474db5275cf6=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.105"
                    ],
                    "mac": "de:ca:a9:8e:a6:5c",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.105"
                    ],
                    "mac": "de:ca:a9:8e:a6:5c",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.4.105
IPs:
  IP:  10.244.4.105
Containers:
  webserver:
    Container ID:  docker://6ef6a2910643a301b2c9a661dd74745ed3c619e3bb1623e17262fa4ceff79798
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 17 Jun 2022 23:20:48 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j4qvz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-j4qvz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node1
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  109s  default-scheduler  Successfully assigned nettest-3241/netserver-0 to node1
  Normal  Pulling    107s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     106s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 240.020897ms
  Normal  Created    106s  kubelet            Created container webserver
  Normal  Started    106s  kubelet            Started container webserver

Jun 17 23:22:34.854: INFO: 
Output of kubectl describe pod nettest-3241/netserver-1:

Jun 17 23:22:34.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-3241 describe pod netserver-1 --namespace=nettest-3241'
Jun 17 23:22:35.058: INFO: stderr: ""
Jun 17 23:22:35.058: INFO: stdout: "Name:         netserver-1\nNamespace:    nettest-3241\nPriority:     0\nNode:         node2/10.10.190.208\nStart Time:   Fri, 17 Jun 2022 23:20:45 +0000\nLabels:       selector-d42884b5-0345-44ef-ad9d-474db5275cf6=true\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.139\"\n                    ],\n                    \"mac\": \"82:48:d6:0b:d6:9d\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.139\"\n                    ],\n                    \"mac\": \"82:48:d6:0b:d6:9d\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: collectd\nStatus:       Running\nIP:           10.244.3.139\nIPs:\n  IP:  10.244.3.139\nContainers:\n  webserver:\n    Container ID:  docker://c7f65a7f1643fc4b8e1f847f17f87e678848be59f0b2ef0bc355532dd7f4dc49\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Fri, 17 Jun 2022 23:20:54 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kn7cp (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-kn7cp:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       \n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node2\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  110s  default-scheduler  Successfully assigned nettest-3241/netserver-1 to node2\n  Normal  Pulling    103s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     103s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 303.772462ms\n  Normal  Created    102s  kubelet            Created container webserver\n  Normal  Started    101s  kubelet            Started container webserver\n"
Jun 17 23:22:35.058: INFO: Name:         netserver-1
Namespace:    nettest-3241
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Fri, 17 Jun 2022 23:20:45 +0000
Labels:       selector-d42884b5-0345-44ef-ad9d-474db5275cf6=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.139"
                    ],
                    "mac": "82:48:d6:0b:d6:9d",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.139"
                    ],
                    "mac": "82:48:d6:0b:d6:9d",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.3.139
IPs:
  IP:  10.244.3.139
Containers:
  webserver:
    Container ID:  docker://c7f65a7f1643fc4b8e1f847f17f87e678848be59f0b2ef0bc355532dd7f4dc49
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 17 Jun 2022 23:20:54 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kn7cp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-kn7cp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node2
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  110s  default-scheduler  Successfully assigned nettest-3241/netserver-1 to node2
  Normal  Pulling    103s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     103s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 303.772462ms
  Normal  Created    102s  kubelet            Created container webserver
  Normal  Started    101s  kubelet            Started container webserver

Jun 17 23:22:35.058: INFO: encountered error during dial (did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{}])
Jun 17 23:22:35.059: FAIL: failed dialing endpoint, did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{}]

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00044c480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00044c480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00044c480, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-3241".
STEP: Found 15 events.
Jun 17 23:22:35.064: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned nettest-3241/netserver-0 to node1
Jun 17 23:22:35.064: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned nettest-3241/netserver-1 to node2
Jun 17 23:22:35.064: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned nettest-3241/test-container-pod to node2
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:20:47 +0000 UTC - event for netserver-0: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:20:48 +0000 UTC - event for netserver-0: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 240.020897ms
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:20:48 +0000 UTC - event for netserver-0: {kubelet node1} Created: Created container webserver
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:20:48 +0000 UTC - event for netserver-0: {kubelet node1} Started: Started container webserver
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:20:52 +0000 UTC - event for netserver-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:20:52 +0000 UTC - event for netserver-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 303.772462ms
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:20:53 +0000 UTC - event for netserver-1: {kubelet node2} Created: Created container webserver
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:20:54 +0000 UTC - event for netserver-1: {kubelet node2} Started: Started container webserver
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:21:12 +0000 UTC - event for test-container-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:21:13 +0000 UTC - event for test-container-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 304.196625ms
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:21:13 +0000 UTC - event for test-container-pod: {kubelet node2} Created: Created container webserver
Jun 17 23:22:35.064: INFO: At 2022-06-17 23:21:13 +0000 UTC - event for test-container-pod: {kubelet node2} Started: Started container webserver
Jun 17 23:22:35.067: INFO: POD                 NODE   PHASE    GRACE  CONDITIONS
Jun 17 23:22:35.067: INFO: netserver-0         node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:21:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:21:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:44 +0000 UTC  }]
Jun 17 23:22:35.067: INFO: netserver-1         node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:21:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:21:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:20:44 +0000 UTC  }]
Jun 17 23:22:35.068: INFO: test-container-pod  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:21:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:21:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:21:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-17 23:21:10 +0000 UTC  }]
Jun 17 23:22:35.068: INFO: 
Jun 17 23:22:35.073: INFO: 
Logging node info for node master1
Jun 17 23:22:35.076: INFO: Node Info: &Node{ObjectMeta:{master1    47691bb2-4ee9-4386-8bec-0f9db1917afd 76056 0 2022-06-17 19:59:00 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-17 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-06-17 20:06:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:36 +0000 UTC,LastTransitionTime:2022-06-17 20:04:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:34 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:34 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:34 +0000 UTC,LastTransitionTime:2022-06-17 19:58:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:34 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f59e69c8e0cc41ff966b02f015e9cf30,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:81e1dc93-cb0d-4bf9-b7c4-28e0b4aef603,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:35.076: INFO: 
Logging kubelet events for node master1
Jun 17 23:22:35.079: INFO: 
Logging pods the kubelet thinks are on node master1
Jun 17 23:22:35.088: INFO: kube-apiserver-master1 started at 2022-06-17 20:00:04 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.088: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 17 23:22:35.088: INFO: kube-controller-manager-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.088: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 17 23:22:35.088: INFO: kube-flannel-z9nqz started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:35.088: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:22:35.088: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:35.088: INFO: kube-multus-ds-amd64-rqb4r started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.088: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:35.088: INFO: kube-scheduler-master1 started at 2022-06-17 20:08:08 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.088: INFO: 	Container kube-scheduler ready: true, restart count 0
Jun 17 23:22:35.088: INFO: kube-proxy-b2xlr started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.088: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 17 23:22:35.088: INFO: container-registry-65d7c44b96-hq7rp started at 2022-06-17 20:06:17 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:35.088: INFO: 	Container docker-registry ready: true, restart count 0
Jun 17 23:22:35.088: INFO: 	Container nginx ready: true, restart count 0
Jun 17 23:22:35.088: INFO: node-exporter-bts5h started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:35.088: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:35.088: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:35.177: INFO: 
Latency metrics for node master1
Jun 17 23:22:35.177: INFO: 
Logging node info for node master2
Jun 17 23:22:35.181: INFO: Node Info: &Node{ObjectMeta:{master2    71ab7827-6f85-4ecf-82ce-5b27d8ba1a11 76049 0 2022-06-17 19:59:29 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-17 19:59:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-06-17 20:01:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-06-17 20:09:40 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:35 +0000 UTC,LastTransitionTime:2022-06-17 20:04:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:31 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:31 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:31 +0000 UTC,LastTransitionTime:2022-06-17 19:59:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:31 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ba0363db4fd2476098c500989c8b1fd5,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cafb2298-e9e8-4bc9-82ab-0feb6c416066,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:35.181: INFO: 
Logging kubelet events for node master2
Jun 17 23:22:35.183: INFO: 
Logging pods the kubelet thinks are on node master2
Jun 17 23:22:35.193: INFO: kube-controller-manager-master2 started at 2022-06-17 20:08:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 17 23:22:35.194: INFO: kube-scheduler-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Container kube-scheduler ready: true, restart count 2
Jun 17 23:22:35.194: INFO: kube-flannel-kmc7f started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:22:35.194: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:35.194: INFO: node-feature-discovery-controller-cff799f9f-zlzkd started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Container nfd-controller ready: true, restart count 0
Jun 17 23:22:35.194: INFO: node-exporter-ccmb2 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:35.194: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:35.194: INFO: kube-apiserver-master2 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 17 23:22:35.194: INFO: kube-proxy-52p78 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 17 23:22:35.194: INFO: kube-multus-ds-amd64-spg7h started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:35.194: INFO: coredns-8474476ff8-55pd7 started at 2022-06-17 20:02:14 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Container coredns ready: true, restart count 1
Jun 17 23:22:35.194: INFO: dns-autoscaler-7df78bfcfb-ml447 started at 2022-06-17 20:02:16 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.194: INFO: 	Container autoscaler ready: true, restart count 1
Jun 17 23:22:35.288: INFO: 
Latency metrics for node master2
Jun 17 23:22:35.288: INFO: 
Logging node info for node master3
Jun 17 23:22:35.291: INFO: Node Info: &Node{ObjectMeta:{master3    4495d2b3-3dc7-45fa-93e4-2ad5ef91730e 76045 0 2022-06-17 19:59:37 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-06-17 19:59:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-06-17 20:00:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-06-17 20:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-06-17 20:12:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:30 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:30 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:30 +0000 UTC,LastTransitionTime:2022-06-17 19:59:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:30 +0000 UTC,LastTransitionTime:2022-06-17 20:01:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e420146228b341cbbaf470c338ef023e,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:88e9c5d2-4324-4e63-8acf-ee80e9511e70,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:35.292: INFO: 
Logging kubelet events for node master3
Jun 17 23:22:35.294: INFO: 
Logging pods the kubelet thinks are on node master3
Jun 17 23:22:35.303: INFO: node-exporter-tv8q4 started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:35.303: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:35.303: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:35.303: INFO: kube-apiserver-master3 started at 2022-06-17 20:00:05 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.303: INFO: 	Container kube-apiserver ready: true, restart count 0
Jun 17 23:22:35.303: INFO: kube-scheduler-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.303: INFO: 	Container kube-scheduler ready: true, restart count 2
Jun 17 23:22:35.303: INFO: kube-proxy-qw2lh started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.303: INFO: 	Container kube-proxy ready: true, restart count 1
Jun 17 23:22:35.303: INFO: kube-flannel-7sp2w started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:35.303: INFO: 	Init container install-cni ready: true, restart count 0
Jun 17 23:22:35.303: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:35.303: INFO: kube-multus-ds-amd64-vtvhp started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.303: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:35.303: INFO: kube-controller-manager-master3 started at 2022-06-17 20:08:07 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.303: INFO: 	Container kube-controller-manager ready: true, restart count 2
Jun 17 23:22:35.303: INFO: coredns-8474476ff8-plfdq started at 2022-06-17 20:02:18 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.303: INFO: 	Container coredns ready: true, restart count 1
Jun 17 23:22:35.303: INFO: prometheus-operator-585ccfb458-kz9ss started at 2022-06-17 20:14:47 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:35.303: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:35.303: INFO: 	Container prometheus-operator ready: true, restart count 0
Jun 17 23:22:35.394: INFO: 
Latency metrics for node master3
Jun 17 23:22:35.394: INFO: 
Logging node info for node node1
Jun 17 23:22:35.397: INFO: Node Info: &Node{ObjectMeta:{node1    2db3a28c-448f-4511-9db8-4ef739b681b1 76034 0 2022-06-17 20:00:39 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-17 20:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 
20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 22:24:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:34 +0000 UTC,LastTransitionTime:2022-06-17 20:04:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:27 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:27 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:27 +0000 UTC,LastTransitionTime:2022-06-17 20:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:27 +0000 UTC,LastTransitionTime:2022-06-17 20:01:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b4b206100a5d45e9959c4a79c836676a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:5a19e1a7-8d9a-4724-83a4-bd77b1a0f8f4,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1007077455,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 
k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[localhost:30500/tasextender@sha256:a226a9c613b9eeed89115dd78ba697306e50d1b4466033c8415371714720c861 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:35.398: INFO: 
Logging kubelet events for node node1
Jun 17 23:22:35.400: INFO: 
Logging pods the kubelet thinks are on node node1
Jun 17 23:22:35.414: INFO: kubernetes-dashboard-785dcbb76d-26kg6 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container kubernetes-dashboard ready: true, restart count 2
Jun 17 23:22:35.414: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv started at 2022-06-17 20:17:57 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container tas-extender ready: true, restart count 0
Jun 17 23:22:35.414: INFO: netserver-0 started at 2022-06-17 23:20:45 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:22:35.414: INFO: node-feature-discovery-worker-dgp4b started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 17 23:22:35.414: INFO: prometheus-k8s-0 started at 2022-06-17 20:14:56 +0000 UTC (0+4 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container config-reloader ready: true, restart count 0
Jun 17 23:22:35.414: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Jun 17 23:22:35.414: INFO: 	Container grafana ready: true, restart count 0
Jun 17 23:22:35.414: INFO: 	Container prometheus ready: true, restart count 1
Jun 17 23:22:35.414: INFO: collectd-5src2 started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container collectd ready: true, restart count 0
Jun 17 23:22:35.414: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 17 23:22:35.414: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 17 23:22:35.414: INFO: kube-flannel-wqcwq started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:22:35.414: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:35.414: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 17 23:22:35.414: INFO: cmk-init-discover-node1-bvmrv started at 2022-06-17 20:13:02 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container discover ready: false, restart count 0
Jun 17 23:22:35.414: INFO: 	Container init ready: false, restart count 0
Jun 17 23:22:35.414: INFO: 	Container install ready: false, restart count 0
Jun 17 23:22:35.414: INFO: node-exporter-8ftgl started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:35.414: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:35.414: INFO: cmk-webhook-6c9d5f8578-qcmrd started at 2022-06-17 20:13:52 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container cmk-webhook ready: true, restart count 0
Jun 17 23:22:35.414: INFO: kube-proxy-t4lqk started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 17 23:22:35.414: INFO: cmk-xh247 started at 2022-06-17 20:13:51 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container nodereport ready: true, restart count 0
Jun 17 23:22:35.414: INFO: 	Container reconcile ready: true, restart count 0
Jun 17 23:22:35.414: INFO: nginx-proxy-node1 started at 2022-06-17 20:00:39 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 17 23:22:35.414: INFO: kube-multus-ds-amd64-m6vf8 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.414: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:35.625: INFO: 
Latency metrics for node node1
Jun 17 23:22:35.625: INFO: 
Logging node info for node node2
Jun 17 23:22:35.628: INFO: Node Info: &Node{ObjectMeta:{node2    467d2582-10f7-475b-9f20-5b7c2e46267a 76054 0 2022-06-17 20:00:37 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.66.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-06-17 20:00:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-06-17 
20:00:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-06-17 20:01:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-06-17 20:09:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-06-17 20:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-06-17 22:24:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-06-17 23:05:09 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-06-17 20:04:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:33 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:33 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:33 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-06-17 23:22:33 +0000 UTC,LastTransitionTime:2022-06-17 20:00:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-06-17 23:22:33 +0000 UTC,LastTransitionTime:2022-06-17 20:04:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b9e31fbb30d4e48b9ac063755a76deb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:5cd4c1a7-c6ca-496c-9122-4f944da708e6,KernelVersion:3.10.0-1160.66.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[localhost:30500/cmk@sha256:7227e64d78c2a9dd290de0ec1cbbaf536dad977fc3efca629dc87d6ffb97071e localhost:30500/cmk:v1.5.1],SizeBytes:727740703,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:21d7abd21ac65aac7d19aaa2b1b05a71e496b7bf6251c76df58855be9c3aaa59 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jun 17 23:22:35.629: INFO: 
Logging kubelet events for node node2
Jun 17 23:22:35.631: INFO: 
Logging pods the kubelet thinks are on node node2
Jun 17 23:22:35.642: INFO: kube-multus-ds-amd64-hblk4 started at 2022-06-17 20:01:47 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container kube-multus ready: true, restart count 1
Jun 17 23:22:35.642: INFO: cmk-5gtjq started at 2022-06-17 20:13:52 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container nodereport ready: true, restart count 0
Jun 17 23:22:35.642: INFO: 	Container reconcile ready: true, restart count 0
Jun 17 23:22:35.642: INFO: collectd-6bcqz started at 2022-06-17 20:18:47 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container collectd ready: true, restart count 0
Jun 17 23:22:35.642: INFO: 	Container collectd-exporter ready: true, restart count 0
Jun 17 23:22:35.642: INFO: 	Container rbac-proxy ready: true, restart count 0
Jun 17 23:22:35.642: INFO: nginx-proxy-node2 started at 2022-06-17 20:00:37 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container nginx-proxy ready: true, restart count 2
Jun 17 23:22:35.642: INFO: kube-proxy-pvtj6 started at 2022-06-17 20:00:43 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container kube-proxy ready: true, restart count 2
Jun 17 23:22:35.642: INFO: test-container-pod started at 2022-06-17 23:21:10 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:22:35.642: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 started at 2022-06-17 20:02:19 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 17 23:22:35.642: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 started at 2022-06-17 20:10:41 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container kube-sriovdp ready: true, restart count 0
Jun 17 23:22:35.642: INFO: node-exporter-xgz6d started at 2022-06-17 20:14:54 +0000 UTC (0+2 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Jun 17 23:22:35.642: INFO: 	Container node-exporter ready: true, restart count 0
Jun 17 23:22:35.642: INFO: kube-flannel-plbl8 started at 2022-06-17 20:01:38 +0000 UTC (1+1 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Init container install-cni ready: true, restart count 2
Jun 17 23:22:35.642: INFO: 	Container kube-flannel ready: true, restart count 2
Jun 17 23:22:35.642: INFO: netserver-1 started at 2022-06-17 23:20:45 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container webserver ready: true, restart count 0
Jun 17 23:22:35.642: INFO: node-feature-discovery-worker-82r46 started at 2022-06-17 20:09:28 +0000 UTC (0+1 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container nfd-worker ready: true, restart count 0
Jun 17 23:22:35.642: INFO: cmk-init-discover-node2-z2vgz started at 2022-06-17 20:13:25 +0000 UTC (0+3 container statuses recorded)
Jun 17 23:22:35.642: INFO: 	Container discover ready: false, restart count 0
Jun 17 23:22:35.642: INFO: 	Container init ready: false, restart count 0
Jun 17 23:22:35.642: INFO: 	Container install ready: false, restart count 0
Jun 17 23:22:35.797: INFO: 
Latency metrics for node node2
Jun 17 23:22:35.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3241" for this suite.


• Failure [111.057 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Jun 17 23:22:35.059: failed dialing endpoint, did not find expected responses... 
    Tries 34
    Command curl -g -q -s 'http://10.244.4.105:8080/dial?request=hostname&protocol=http&host=10.10.190.207&port=30440&tries=1'
    retrieved map[]
    expected map[netserver-0:{} netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
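The failure above comes from the e2e framework's "dial" probe: a test pod queries agnhost's `/dial` endpoint, which fans out requests to the target host/port and reports which backends answered. The test passes only when every expected netserver hostname appears in the collected responses; here the retrieved map was empty after 34 tries. As a minimal sketch (not the actual framework code; function and field names are illustrative, though `responses` is the field agnhost returns), the comparison logic looks roughly like this:

```python
# Hypothetical sketch of the pass/fail check behind the e2e "dial" probe.
# The framework polls /dial and succeeds once the set of responding
# backends covers the expected set; an empty "responses" list, as in the
# failure above, means the NodePort never routed to any netserver.
import json


def responses_match(raw_body: str, expected: set[str]) -> bool:
    """Parse an agnhost /dial reply and check every expected backend answered."""
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        return False
    seen = set(body.get("responses", []))
    return expected.issubset(seen)


# The failing case from the log: retrieved map[] vs the two expected netservers.
print(responses_match('{"responses": []}', {"netserver-0", "netserver-1"}))  # False
# The passing case, once both endpoints answer through the service.
print(responses_match('{"responses": ["netserver-0", "netserver-1"]}',
                      {"netserver-0", "netserver-1"}))  # True
```

Because the probe aggregates over retries, a single missing backend (e.g. only `netserver-0` ever answering) also fails, which distinguishes a broken service from uneven endpoint load-balancing.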
{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector","total":-1,"completed":2,"skipped":331,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector"]}
Jun 17 23:22:35.813: INFO: Running AfterSuite actions on all nodes
Jun 17 23:22:35.813: INFO: Running AfterSuite actions on node 1
Jun 17 23:22:35.813: INFO: Skipping dumping logs from cluster



Summarizing 3 Failures:

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

[Fail] [sig-network] Networking Granular Checks: Services [It] should function for multiple endpoint-Services with same selector 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 27 of 5773 Specs in 213.348 seconds
FAIL! -- 24 Passed | 3 Failed | 0 Pending | 5746 Skipped


Ginkgo ran 1 suite in 3m35.068365364s
Test Suite Failed