Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636160425 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Nov  6 01:00:27.614: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.619: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:27.648: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov  6 01:00:27.718: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting
Nov  6 01:00:27.718: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting
Nov  6 01:00:27.718: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov  6 01:00:27.718: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov  6 01:00:27.718: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov  6 01:00:27.732: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov  6 01:00:27.732: INFO: e2e test version: v1.21.5
Nov  6 01:00:27.733: INFO: kube-apiserver version: v1.21.1
Nov  6 01:00:27.734: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.743: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Nov  6 01:00:27.736: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.756: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
Nov  6 01:00:27.760: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.781: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Nov  6 01:00:27.765: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.787: INFO: Cluster IP family: ipv4
SS
------------------------------
Nov  6 01:00:27.767: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.789: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Nov  6 01:00:27.771: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.794: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Nov  6 01:00:27.775: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.798: INFO: Cluster IP family: ipv4
Nov  6 01:00:27.779: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.801: INFO: Cluster IP family: ipv4
S
------------------------------
Nov  6 01:00:27.776: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.798: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Nov  6 01:00:27.780: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:27.804: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:27.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
W1106 01:00:27.822021      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:27.822: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:27.825: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Nov  6 01:00:27.845: INFO: (0) /api/v1/nodes/node2:10250/proxy/logs/:
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W1106 01:00:27.894301      33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:27.894: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:27.896: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov  6 01:00:27.898: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:27.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-6849" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
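
The PodSecurityPolicy probe in the BeforeEach above (and in several later specs) works by attempting a server-side dry-run pod create; when an admission webhook such as cmk.intel.com cannot handle dry run, the framework logs the error and assumes PSP is effectively disabled. A rough manual equivalent, sketched with an arbitrary pod name and image rather than anything the test itself uses:

# server-side dry-run: the apiserver runs admission (including webhooks) but persists nothing
kubectl run psp-dryrun-probe --image=registry.k8s.io/pause:3.9 --dry-run=server
# a webhook that rejects dry run fails here with "does not support dry run",
# which is the condition the e2e framework tolerates in the log above
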
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:27.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-5523
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-5523
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:28.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5523" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•SSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":2,"skipped":35,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Nov  6 01:00:28.300: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:28.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-5562" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
W1106 01:00:28.427128      35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:28.427: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:28.429: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Nov  6 01:00:28.431: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:28.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-6676" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
W1106 01:00:28.645885      37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:28.646: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:28.647: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Nov  6 01:00:28.667: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Nov  6 01:00:28.670: INFO: starting watch
STEP: patching
STEP: updating
Nov  6 01:00:28.704: INFO: waiting for watch events with expected annotations
Nov  6 01:00:28.704: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Nov  6 01:00:28.704: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:28.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-1466" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":306,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:29.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Nov  6 01:00:30.739: INFO: (0) /api/v1/nodes/node1/proxy/logs/: 
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-1117
Nov  6 01:00:28.515: INFO: hairpin-test cluster ip: 10.233.30.0
STEP: creating a client/server pod
Nov  6 01:00:28.531: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:30.534: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:32.534: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:34.534: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:36.535: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:38.534: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:40.537: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-1117 to expose endpoints map[hairpin:[8080]]
Nov  6 01:00:40.546: INFO: successfully validated that service hairpin-test in namespace services-1117 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Nov  6 01:00:41.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1117 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Nov  6 01:00:42.226: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Nov  6 01:00:42.226: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Nov  6 01:00:42.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1117 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.30.0 8080'
Nov  6 01:00:42.453: INFO: stderr: "+ nc -v -t -w 2 10.233.30.0 8080\n+ echo hostName\nConnection to 10.233.30.0 8080 port [tcp/http-alt] succeeded!\n"
Nov  6 01:00:42.453: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:42.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1117" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:13.971 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":3,"skipped":207,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Nov  6 01:00:48.155: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Nov  6 01:00:48.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-test6ns5g -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Nov  6 01:00:48.388: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Nov  6 01:00:48.388: INFO: stdout: "10.244.0.8:52294"
STEP: Verifying the preserved source ip
Nov  6 01:00:48.388: INFO: Waiting up to 2m0s to get response from 10.244.3.166:8080
Nov  6 01:00:48.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-test6ns5g -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.166:8080/clientip'
Nov  6 01:00:48.638: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.166:8080/clientip\n"
Nov  6 01:00:48.639: INFO: stdout: "10.244.0.8:44770"
STEP: Verifying the preserved source ip
Nov  6 01:00:48.639: INFO: Waiting up to 2m0s to get response from 10.244.2.7:8080
Nov  6 01:00:48.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-test6ns5g -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip'
Nov  6 01:00:48.864: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip\n"
Nov  6 01:00:48.864: INFO: stdout: "10.244.0.8:46546"
STEP: Verifying the preserved source ip
Nov  6 01:00:48.864: INFO: Waiting up to 2m0s to get response from 10.244.4.242:8080
Nov  6 01:00:48.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-test6ns5g -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.242:8080/clientip'
Nov  6 01:00:49.114: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.242:8080/clientip\n"
Nov  6 01:00:49.114: INFO: stdout: "10.244.0.8:55986"
STEP: Verifying the preserved source ip
Nov  6 01:00:49.114: INFO: Waiting up to 2m0s to get response from 10.244.0.8:8080
Nov  6 01:00:49.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testbj9v9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.8:8080/clientip'
Nov  6 01:00:49.354: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.8:8080/clientip\n"
Nov  6 01:00:49.355: INFO: stdout: "10.244.1.4:36116"
STEP: Verifying the preserved source ip
Nov  6 01:00:49.355: INFO: Waiting up to 2m0s to get response from 10.244.3.166:8080
Nov  6 01:00:49.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testbj9v9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.166:8080/clientip'
Nov  6 01:00:49.599: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.166:8080/clientip\n"
Nov  6 01:00:49.599: INFO: stdout: "10.244.1.4:36696"
STEP: Verifying the preserved source ip
Nov  6 01:00:49.599: INFO: Waiting up to 2m0s to get response from 10.244.2.7:8080
Nov  6 01:00:49.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testbj9v9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip'
Nov  6 01:00:49.834: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip\n"
Nov  6 01:00:49.834: INFO: stdout: "10.244.1.4:60374"
STEP: Verifying the preserved source ip
Nov  6 01:00:49.834: INFO: Waiting up to 2m0s to get response from 10.244.4.242:8080
Nov  6 01:00:49.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testbj9v9 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.242:8080/clientip'
Nov  6 01:00:50.065: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.242:8080/clientip\n"
Nov  6 01:00:50.065: INFO: stdout: "10.244.1.4:39912"
STEP: Verifying the preserved source ip
Nov  6 01:00:50.065: INFO: Waiting up to 2m0s to get response from 10.244.0.8:8080
Nov  6 01:00:50.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testbz695 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.8:8080/clientip'
Nov  6 01:00:50.350: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.8:8080/clientip\n"
Nov  6 01:00:50.350: INFO: stdout: "10.244.3.166:57700"
STEP: Verifying the preserved source ip
Nov  6 01:00:50.350: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Nov  6 01:00:50.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testbz695 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Nov  6 01:00:50.612: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Nov  6 01:00:50.612: INFO: stdout: "10.244.3.166:54908"
STEP: Verifying the preserved source ip
Nov  6 01:00:50.612: INFO: Waiting up to 2m0s to get response from 10.244.2.7:8080
Nov  6 01:00:50.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testbz695 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip'
Nov  6 01:00:50.895: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip\n"
Nov  6 01:00:50.895: INFO: stdout: "10.244.3.166:40038"
STEP: Verifying the preserved source ip
Nov  6 01:00:50.895: INFO: Waiting up to 2m0s to get response from 10.244.4.242:8080
Nov  6 01:00:50.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testbz695 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.242:8080/clientip'
Nov  6 01:00:51.545: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.242:8080/clientip\n"
Nov  6 01:00:51.546: INFO: stdout: "10.244.3.166:42198"
STEP: Verifying the preserved source ip
Nov  6 01:00:51.546: INFO: Waiting up to 2m0s to get response from 10.244.0.8:8080
Nov  6 01:00:51.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testmdgtj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.8:8080/clientip'
Nov  6 01:00:51.802: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.8:8080/clientip\n"
Nov  6 01:00:51.802: INFO: stdout: "10.244.2.7:37516"
STEP: Verifying the preserved source ip
Nov  6 01:00:51.802: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Nov  6 01:00:51.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testmdgtj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Nov  6 01:00:52.045: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Nov  6 01:00:52.045: INFO: stdout: "10.244.2.7:58400"
STEP: Verifying the preserved source ip
Nov  6 01:00:52.045: INFO: Waiting up to 2m0s to get response from 10.244.3.166:8080
Nov  6 01:00:52.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testmdgtj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.166:8080/clientip'
Nov  6 01:00:52.274: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.166:8080/clientip\n"
Nov  6 01:00:52.274: INFO: stdout: "10.244.2.7:47052"
STEP: Verifying the preserved source ip
Nov  6 01:00:52.274: INFO: Waiting up to 2m0s to get response from 10.244.4.242:8080
Nov  6 01:00:52.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testmdgtj -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.242:8080/clientip'
Nov  6 01:00:52.528: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.242:8080/clientip\n"
Nov  6 01:00:52.528: INFO: stdout: "10.244.2.7:50992"
STEP: Verifying the preserved source ip
Nov  6 01:00:52.528: INFO: Waiting up to 2m0s to get response from 10.244.0.8:8080
Nov  6 01:00:52.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testx8d7k -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.8:8080/clientip'
Nov  6 01:00:52.825: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.8:8080/clientip\n"
Nov  6 01:00:52.825: INFO: stdout: "10.244.4.242:47926"
STEP: Verifying the preserved source ip
Nov  6 01:00:52.825: INFO: Waiting up to 2m0s to get response from 10.244.1.4:8080
Nov  6 01:00:52.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testx8d7k -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip'
Nov  6 01:00:53.101: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.4:8080/clientip\n"
Nov  6 01:00:53.101: INFO: stdout: "10.244.4.242:54508"
STEP: Verifying the preserved source ip
Nov  6 01:00:53.101: INFO: Waiting up to 2m0s to get response from 10.244.3.166:8080
Nov  6 01:00:53.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testx8d7k -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.166:8080/clientip'
Nov  6 01:00:53.395: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.166:8080/clientip\n"
Nov  6 01:00:53.395: INFO: stdout: "10.244.4.242:45066"
STEP: Verifying the preserved source ip
Nov  6 01:00:53.395: INFO: Waiting up to 2m0s to get response from 10.244.2.7:8080
Nov  6 01:00:53.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4749 exec no-snat-testx8d7k -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip'
Nov  6 01:00:53.804: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip\n"
Nov  6 01:00:53.805: INFO: stdout: "10.244.4.242:37792"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:53.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-4749" for this suite.


• [SLOW TEST:25.745 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":1,"skipped":51,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:27.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1106 01:00:28.004005      30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:28.004: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:28.005: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-7387
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:00:28.118: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:28.157: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:30.163: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:32.161: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:34.161: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:36.159: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:38.161: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:40.162: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:42.161: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:44.163: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:46.161: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:48.160: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:50.162: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:00:50.167: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:00:54.222: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:00:54.222: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:54.228: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:54.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7387" for this suite.


S [SKIPPING] [26.258 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1106 01:00:28.240693      39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:28.240: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:28.242: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-5120
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:00:28.363: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:28.399: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:30.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:32.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:34.418: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:36.402: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:38.403: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:40.404: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:42.403: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:44.404: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:46.402: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:48.403: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:50.404: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:00:50.409: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:00:54.452: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:00:54.452: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:54.492: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:54.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5120" for this suite.


S [SKIPPING] [26.286 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1106 01:00:28.319571      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:28.319: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:28.321: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-2114
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:00:28.446: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:28.479: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:30.483: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:32.487: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:34.484: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:36.482: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:38.487: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:40.488: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:42.484: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:44.483: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:46.482: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:48.483: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:50.486: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:00:50.491: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:00:54.520: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:00:54.520: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:54.526: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:54.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2114" for this suite.


S [SKIPPING] [26.237 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1106 01:00:28.384005      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:28.384: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:28.385: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-1488
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:00:28.498: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:28.535: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:30.540: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:32.539: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:34.543: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:36.540: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:38.541: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:40.544: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:42.538: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:44.544: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:46.540: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:48.541: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:00:48.546: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:00:54.590: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:00:54.590: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:54.597: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:54.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1488" for this suite.


S [SKIPPING] [26.255 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:30.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-8525
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:00:31.059: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:31.100: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:33.104: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:35.105: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:37.104: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:39.104: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:41.103: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:43.104: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:45.105: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:47.105: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:49.105: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:51.103: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:53.104: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:00:53.109: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:00:57.135: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:00:57.135: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:57.142: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:00:57.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8525" for this suite.


S [SKIPPING] [26.248 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:54.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4622.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4622.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4622.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4622.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4622.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4622.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov  6 01:01:06.348: INFO: DNS probes using dns-4622/dns-test-272c4649-d1fd-42d3-8a0b-95a538c59d05 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:06.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4622" for this suite.


• [SLOW TEST:12.107 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":1,"skipped":43,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:57.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Nov  6 01:00:57.327: INFO: Creating new exec pod
Nov  6 01:01:05.344: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Nov  6 01:01:05.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3862 exec execpod-noendpointsjrm6m -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Nov  6 01:01:06.621: INFO: rc: 1
Nov  6 01:01:06.621: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3862 exec execpod-noendpointsjrm6m -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:06.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3862" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:9.335 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":3,"skipped":680,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:06.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov  6 01:01:06.836: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:06.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7350" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
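The ESIPP (external source IP preservation) specs need a cloud load balancer, so they skip on this local provider. For reference, the Service field that behaviour is driven by can be toggled on any LoadBalancer or NodePort Service; a sketch with a placeholder service name my-lb:

  # Keep traffic on nodes that have a local endpoint, preserving the client source IP.
  kubectl patch service my-lb -p '{"spec":{"externalTrafficPolicy":"Local"}}'
  # Revert to the default, which may SNAT and hide the original client IP.
  kubectl patch service my-lb -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'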
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:42.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Nov  6 01:00:42.556: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:44.559: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:46.559: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:48.559: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Nov  6 01:00:48.588: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:50.592: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:52.591: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:54.593: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Nov  6 01:00:56.611: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:58.614: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:00.615: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:02.615: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:04.616: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:06.614: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Nov  6 01:01:06.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-883 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Nov  6 01:01:06.880: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
Nov  6 01:01:06.880: INFO: stdout: "tcp      6 3595 CLOSE_WAIT src=10.244.3.179 dst=10.10.190.208 sport=46650 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=14519 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Nov  6 01:01:06.880: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3595 CLOSE_WAIT src=10.244.3.179 dst=10.10.190.208 sport=46650 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=14519 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:06.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-883" for this suite.


• [SLOW TEST:24.377 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":4,"skipped":227,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:07.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Nov  6 01:01:07.173: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Nov  6 01:01:07.176: INFO: starting watch
STEP: patching
STEP: updating
Nov  6 01:01:07.187: INFO: waiting for watch events with expected annotations
Nov  6 01:01:07.187: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Nov  6 01:01:07.187: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:07.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-6901" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":925,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:53.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Nov  6 01:00:53.916: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-7b3feb7e-3bdd-4a40-9ffd-9b49df880cd0  dns-363  2a9780ea-d381-4374-8b6f-85aacaf06b38 78748 0 2021-11-06 01:00:53 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-11-06 01:00:53 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-lmd72,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-cxthz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cxthz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov  6 01:00:59.928: INFO: testServerIP is 10.244.4.5
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Nov  6 01:00:59.937: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-363  eaf4b913-a25a-4d32-891c-f4dd519c94e1 79043 0 2021-11-06 01:00:59 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-11-06 01:00:59 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-f4cnl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4cnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil
,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.4.5],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Nov  6 01:01:09.944: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-363 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:01:09.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Nov  6 01:01:10.047: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-363 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:01:10.047: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:01:10.209: INFO: Deleting pod e2e-dns-utils...
Nov  6 01:01:10.217: INFO: Deleting pod e2e-configmap-dns-server-7b3feb7e-3bdd-4a40-9ffd-9b49df880cd0...
Nov  6 01:01:10.223: INFO: Deleting configmap e2e-coredns-configmap-lmd72...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:10.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-363" for this suite.


• [SLOW TEST:16.357 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":2,"skipped":76,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-8472
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:00:28.670: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:28.717: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:30.721: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:32.720: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:34.722: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:36.722: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:38.780: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:40.723: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:42.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:44.723: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:46.722: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:48.722: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:00:50.722: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:00:50.727: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov  6 01:00:52.729: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov  6 01:00:54.729: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov  6 01:00:56.732: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov  6 01:00:58.730: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov  6 01:01:00.731: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:01:12.772: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:01:12.772: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:12.779: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:12.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8472" for this suite.


S [SKIPPING] [44.264 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
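Several Granular Checks specs in this run skip with "Requires at least 2 nodes (not -1)". The -1 most likely reflects the suite's node-count setting being left at its default for this local provider rather than an actual count of zero nodes. A quick way to confirm the cluster itself has enough schedulable workers (the label selector is an assumption about how control-plane nodes are marked here):

  # List worker nodes and whether they are cordoned.
  kubectl get nodes --selector='!node-role.kubernetes.io/master' \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.unschedulable}{"\n"}{end}'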
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:12.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-2599
STEP: changing service nodeport-range-test to out-of-range NodePort 5790
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 5790
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:12.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2599" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":296,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
W1106 01:00:28.278146      25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:28.278: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:28.280: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
Nov  6 01:00:28.287: INFO: deploying iperf2 server
Nov  6 01:00:28.291: INFO: Waiting for deployment "iperf2-server-deployment" to complete
Nov  6 01:00:28.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Nov  6 01:00:30.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  6 01:00:32.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  6 01:00:34.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  6 01:00:36.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  6 01:00:38.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757228, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  6 01:00:40.311: INFO: waiting for iperf2 server endpoints
Nov  6 01:00:42.316: INFO: found iperf2 server endpoints
Nov  6 01:00:42.316: INFO: waiting for client pods to be running
Nov  6 01:00:44.323: INFO: all client pods are ready: 2 pods
Nov  6 01:00:44.325: INFO: server pod phase Running
Nov  6 01:00:44.325: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-06 01:00:28 +0000 UTC Reason: Message:}
Nov  6 01:00:44.325: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-06 01:00:38 +0000 UTC Reason: Message:}
Nov  6 01:00:44.325: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-06 01:00:38 +0000 UTC Reason: Message:}
Nov  6 01:00:44.325: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-06 01:00:28 +0000 UTC Reason: Message:}
Nov  6 01:00:44.325: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2021-11-06 01:00:38 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://ad068971e2bf55696e44933200780082afa0f1e61ef7cca71879d7b87542d2f2 Started:0xc002d5b50c}
Nov  6 01:00:44.325: INFO: found 2 matching client pods
Nov  6 01:00:44.328: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-7321 PodName:iperf2-clients-qh4p7 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:00:44.328: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:44.422: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Nov  6 01:00:44.422: INFO: iperf version: 
Nov  6 01:00:44.422: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-qh4p7 (node node1)
Nov  6 01:00:44.455: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-7321 PodName:iperf2-clients-qh4p7 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:00:44.455: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:59.591: INFO: Exec stderr: ""
Nov  6 01:00:59.591: INFO: output from exec on client pod iperf2-clients-qh4p7 (node node1): 
20211106010045.552,10.244.3.172,54548,10.233.61.250,6789,3,0.0-1.0,117440512,939524096
20211106010046.560,10.244.3.172,54548,10.233.61.250,6789,3,1.0-2.0,114425856,915406848
20211106010047.546,10.244.3.172,54548,10.233.61.250,6789,3,2.0-3.0,112197632,897581056
20211106010048.553,10.244.3.172,54548,10.233.61.250,6789,3,3.0-4.0,117702656,941621248
20211106010049.540,10.244.3.172,54548,10.233.61.250,6789,3,4.0-5.0,116654080,933232640
20211106010050.547,10.244.3.172,54548,10.233.61.250,6789,3,5.0-6.0,115998720,927989760
20211106010051.553,10.244.3.172,54548,10.233.61.250,6789,3,6.0-7.0,115998720,927989760
20211106010052.559,10.244.3.172,54548,10.233.61.250,6789,3,7.0-8.0,117571584,940572672
20211106010053.548,10.244.3.172,54548,10.233.61.250,6789,3,8.0-9.0,114425856,915406848
20211106010054.556,10.244.3.172,54548,10.233.61.250,6789,3,9.0-10.0,116260864,930086912
20211106010054.556,10.244.3.172,54548,10.233.61.250,6789,3,0.0-10.0,1158676480,926280283

Nov  6 01:00:59.593: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-7321 PodName:iperf2-clients-wwfvn ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:00:59.593: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:00:59.747: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Nov  6 01:00:59.747: INFO: iperf version: 
Nov  6 01:00:59.747: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-wwfvn (node node2)
Nov  6 01:00:59.749: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-7321 PodName:iperf2-clients-wwfvn ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:00:59.750: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:01:15.278: INFO: Exec stderr: ""
Nov  6 01:01:15.278: INFO: output from exec on client pod iperf2-clients-wwfvn (node node2): 
20211106010101.173,10.244.4.251,41840,10.233.61.250,6789,3,0.0-1.0,2665742336,21325938688
20211106010102.159,10.244.4.251,41840,10.233.61.250,6789,3,1.0-2.0,2182479872,17459838976
20211106010103.166,10.244.4.251,41840,10.233.61.250,6789,3,2.0-3.0,2390228992,19121831936
20211106010104.175,10.244.4.251,41840,10.233.61.250,6789,3,3.0-4.0,3001286656,24010293248
20211106010105.162,10.244.4.251,41840,10.233.61.250,6789,3,4.0-5.0,2996699136,23973593088
20211106010106.174,10.244.4.251,41840,10.233.61.250,6789,3,5.0-6.0,3040739328,24325914624
20211106010107.161,10.244.4.251,41840,10.233.61.250,6789,3,6.0-7.0,2503868416,20030947328
20211106010108.168,10.244.4.251,41840,10.233.61.250,6789,3,7.0-8.0,2638741504,21109932032
20211106010109.175,10.244.4.251,41840,10.233.61.250,6789,3,8.0-9.0,3004825600,24038604800
20211106010110.162,10.244.4.251,41840,10.233.61.250,6789,3,9.0-10.0,3206283264,25650266112
20211106010110.162,10.244.4.251,41840,10.233.61.250,6789,3,0.0-10.0,27630895104,22104170110

Nov  6 01:01:15.279: INFO:                                From                                 To    Bandwidth (MB/s)
Nov  6 01:01:15.279: INFO:                               node1                              node2                 110
Nov  6 01:01:15.279: INFO:                               node2                              node2                2635
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:15.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-7321" for this suite.


• [SLOW TEST:47.032 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":1,"skipped":127,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:15.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov  6 01:01:15.320: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:15.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-2840" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:54.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-9645
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:00:54.963: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:54.998: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:57.001: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:59.002: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:01.001: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:03.001: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:05.002: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:07.002: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:09.006: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:11.000: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:13.002: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:15.003: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:17.001: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:01:17.006: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:01:23.025: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:01:23.025: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:23.032: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:23.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9645" for this suite.


S [SKIPPING] [28.187 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:23.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-7d6722b6-0d95-4941-9893-9f705504126f]
STEP: Verifying pods for RC slow-terminating-unready-pod
Nov  6 01:01:23.314: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Nov  6 01:01:27.335: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-9ph84]: "NOW: 2021-11-06 01:01:27.333227174 +0000 UTC m=+1.954055899", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-1394.svc.cluster.local
Nov  6 01:01:27.335: INFO: Creating new exec pod
Nov  6 01:01:31.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1394 exec execpod-25ttl -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1394.svc.cluster.local:80/'
Nov  6 01:01:31.607: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1394.svc.cluster.local:80/\n"
Nov  6 01:01:31.607: INFO: stdout: "NOW: 2021-11-06 01:01:31.600268388 +0000 UTC m=+6.221097115"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-1394 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Nov  6 01:01:36.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1394 exec execpod-25ttl -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1394.svc.cluster.local:80/; test "$?" -ne "0"'
Nov  6 01:01:37.910: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1394.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Nov  6 01:01:37.911: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Nov  6 01:01:37.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1394 exec execpod-25ttl -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1394.svc.cluster.local:80/'
Nov  6 01:01:38.202: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1394.svc.cluster.local:80/\n"
Nov  6 01:01:38.202: INFO: stdout: "NOW: 2021-11-06 01:01:38.194876377 +0000 UTC m=+12.815705122"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-1394
STEP: deleting service tolerate-unready in namespace services-1394
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:38.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1394" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:14.960 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":1,"skipped":422,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:55.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-8840
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:00:55.256: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:00:55.292: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:57.296: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:59.298: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:01.296: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:03.296: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:05.298: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:07.297: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:09.296: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:11.296: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:13.296: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:15.296: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:17.296: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:01:17.302: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:01:39.336: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:01:39.336: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:39.343: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:39.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8840" for this suite.


S [SKIPPING] [44.216 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:39.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov  6 01:01:39.680: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:39.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-2298" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
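The skip above is a provider gate rather than a setup failure: the ESIPP (external source IP preservation) specs, such as "should only target nodes with endpoints", exercise cloud LoadBalancer Services with externalTrafficPolicy: Local, so they only run on gce/gke and are skipped in BeforeEach on this local cluster. As a rough illustration of the object that feature concerns (field values, names, and ports here are assumptions, not taken from the suite), a sketch using the Kubernetes API types:

  // Illustrative only: a LoadBalancer Service with externalTrafficPolicy: Local,
  // which keeps the original client IP and routes only via nodes hosting endpoints.
  package main

  import (
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/apimachinery/pkg/util/intstr"
  )

  func main() {
      svc := &corev1.Service{
          ObjectMeta: metav1.ObjectMeta{Name: "esipp-example"}, // assumed name
          Spec: corev1.ServiceSpec{
              Type:                  corev1.ServiceTypeLoadBalancer,
              ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
              Selector:              map[string]string{"app": "backend"}, // assumed label
              Ports: []corev1.ServicePort{{
                  Port:       80,
                  TargetPort: intstr.FromInt(8080), // assumed backend port
              }},
          },
      }
      fmt.Printf("%s: type=%s externalTrafficPolicy=%s\n",
          svc.Name, svc.Spec.Type, svc.Spec.ExternalTrafficPolicy)
  }
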
SSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:07.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-6593
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:01:07.607: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:07.637: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:09.641: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:11.642: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:13.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:15.641: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:17.641: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:19.641: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:21.641: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:23.641: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:25.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:27.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:29.643: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:01:29.647: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:31.650: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:33.651: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:35.651: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:37.651: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:01:43.675: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:01:43.675: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:43.682: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:43.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6593" for this suite.


S [SKIPPING] [36.197 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:43.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Nov  6 01:01:43.898: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Nov  6 01:01:44.002: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:44.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-9174" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.144 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:10.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-9994
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:01:10.676: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:10.706: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:12.710: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:14.711: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:16.710: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:18.714: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:20.711: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:22.709: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:24.711: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:26.711: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:28.709: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:30.713: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:32.710: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:01:32.715: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:34.719: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:01:44.740: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:01:44.740: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:44.747: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:44.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9994" for this suite.


S [SKIPPING] [34.214 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:28.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
W1106 01:00:28.322408      28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov  6 01:00:28.322: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov  6 01:00:28.324: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-7782
STEP: creating a client pod for probing the service svc-udp
Nov  6 01:00:28.350: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:30.354: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:32.356: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:34.356: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:36.354: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:38.355: INFO: The status of Pod pod-client is Running (Ready = true)
Nov  6 01:00:38.408: INFO: Pod client logs: Sat Nov  6 01:00:33 UTC 2021
Sat Nov  6 01:00:33 UTC 2021 Try: 1

Sat Nov  6 01:00:33 UTC 2021 Try: 2

Sat Nov  6 01:00:33 UTC 2021 Try: 3

Sat Nov  6 01:00:33 UTC 2021 Try: 4

Sat Nov  6 01:00:33 UTC 2021 Try: 5

Sat Nov  6 01:00:33 UTC 2021 Try: 6

Sat Nov  6 01:00:33 UTC 2021 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Nov  6 01:00:38.421: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:40.427: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:42.424: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:44.455: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-7782 to expose endpoints map[pod-server-1:[80]]
Nov  6 01:00:44.466: INFO: successfully validated that service svc-udp in namespace conntrack-7782 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Nov  6 01:01:44.494: INFO: Pod client logs: Sat Nov  6 01:00:33 UTC 2021
Sat Nov  6 01:00:33 UTC 2021 Try: 1

Sat Nov  6 01:00:33 UTC 2021 Try: 2

Sat Nov  6 01:00:33 UTC 2021 Try: 3

Sat Nov  6 01:00:33 UTC 2021 Try: 4

Sat Nov  6 01:00:33 UTC 2021 Try: 5

Sat Nov  6 01:00:33 UTC 2021 Try: 6

Sat Nov  6 01:00:33 UTC 2021 Try: 7

Sat Nov  6 01:00:38 UTC 2021 Try: 8

Sat Nov  6 01:00:38 UTC 2021 Try: 9

Sat Nov  6 01:00:38 UTC 2021 Try: 10

Sat Nov  6 01:00:38 UTC 2021 Try: 11

Sat Nov  6 01:00:38 UTC 2021 Try: 12

Sat Nov  6 01:00:38 UTC 2021 Try: 13

Sat Nov  6 01:00:43 UTC 2021 Try: 14

Sat Nov  6 01:00:43 UTC 2021 Try: 15

Sat Nov  6 01:00:43 UTC 2021 Try: 16

Sat Nov  6 01:00:43 UTC 2021 Try: 17

Sat Nov  6 01:00:43 UTC 2021 Try: 18

Sat Nov  6 01:00:43 UTC 2021 Try: 19

Sat Nov  6 01:00:48 UTC 2021 Try: 20

Sat Nov  6 01:00:48 UTC 2021 Try: 21

Sat Nov  6 01:00:48 UTC 2021 Try: 22

Sat Nov  6 01:00:48 UTC 2021 Try: 23

Sat Nov  6 01:00:48 UTC 2021 Try: 24

Sat Nov  6 01:00:48 UTC 2021 Try: 25

Sat Nov  6 01:00:53 UTC 2021 Try: 26

Sat Nov  6 01:00:53 UTC 2021 Try: 27

Sat Nov  6 01:00:53 UTC 2021 Try: 28

Sat Nov  6 01:00:53 UTC 2021 Try: 29

Sat Nov  6 01:00:53 UTC 2021 Try: 30

Sat Nov  6 01:00:53 UTC 2021 Try: 31

Sat Nov  6 01:00:58 UTC 2021 Try: 32

Sat Nov  6 01:00:58 UTC 2021 Try: 33

Sat Nov  6 01:00:58 UTC 2021 Try: 34

Sat Nov  6 01:00:58 UTC 2021 Try: 35

Sat Nov  6 01:00:58 UTC 2021 Try: 36

Sat Nov  6 01:00:58 UTC 2021 Try: 37

Sat Nov  6 01:01:03 UTC 2021 Try: 38

Sat Nov  6 01:01:03 UTC 2021 Try: 39

Sat Nov  6 01:01:03 UTC 2021 Try: 40

Sat Nov  6 01:01:03 UTC 2021 Try: 41

Sat Nov  6 01:01:03 UTC 2021 Try: 42

Sat Nov  6 01:01:03 UTC 2021 Try: 43

Sat Nov  6 01:01:08 UTC 2021 Try: 44

Sat Nov  6 01:01:08 UTC 2021 Try: 45

Sat Nov  6 01:01:08 UTC 2021 Try: 46

Sat Nov  6 01:01:08 UTC 2021 Try: 47

Sat Nov  6 01:01:08 UTC 2021 Try: 48

Sat Nov  6 01:01:08 UTC 2021 Try: 49

Sat Nov  6 01:01:13 UTC 2021 Try: 50

Sat Nov  6 01:01:13 UTC 2021 Try: 51

Sat Nov  6 01:01:13 UTC 2021 Try: 52

Sat Nov  6 01:01:13 UTC 2021 Try: 53

Sat Nov  6 01:01:13 UTC 2021 Try: 54

Sat Nov  6 01:01:13 UTC 2021 Try: 55

Sat Nov  6 01:01:18 UTC 2021 Try: 56

Sat Nov  6 01:01:18 UTC 2021 Try: 57

Sat Nov  6 01:01:18 UTC 2021 Try: 58

Sat Nov  6 01:01:18 UTC 2021 Try: 59

Sat Nov  6 01:01:18 UTC 2021 Try: 60

Sat Nov  6 01:01:18 UTC 2021 Try: 61

Sat Nov  6 01:01:23 UTC 2021 Try: 62

Sat Nov  6 01:01:23 UTC 2021 Try: 63

Sat Nov  6 01:01:23 UTC 2021 Try: 64

Sat Nov  6 01:01:23 UTC 2021 Try: 65

Sat Nov  6 01:01:23 UTC 2021 Try: 66

Sat Nov  6 01:01:23 UTC 2021 Try: 67

Sat Nov  6 01:01:28 UTC 2021 Try: 68

Sat Nov  6 01:01:28 UTC 2021 Try: 69

Sat Nov  6 01:01:28 UTC 2021 Try: 70

Sat Nov  6 01:01:28 UTC 2021 Try: 71

Sat Nov  6 01:01:28 UTC 2021 Try: 72

Sat Nov  6 01:01:28 UTC 2021 Try: 73

Sat Nov  6 01:01:33 UTC 2021 Try: 74

Sat Nov  6 01:01:33 UTC 2021 Try: 75

Sat Nov  6 01:01:33 UTC 2021 Try: 76

Sat Nov  6 01:01:33 UTC 2021 Try: 77

Sat Nov  6 01:01:33 UTC 2021 Try: 78

Sat Nov  6 01:01:33 UTC 2021 Try: 79

Sat Nov  6 01:01:38 UTC 2021 Try: 80

Sat Nov  6 01:01:38 UTC 2021 Try: 81

Sat Nov  6 01:01:38 UTC 2021 Try: 82

Sat Nov  6 01:01:38 UTC 2021 Try: 83

Sat Nov  6 01:01:38 UTC 2021 Try: 84

Sat Nov  6 01:01:38 UTC 2021 Try: 85

Sat Nov  6 01:01:43 UTC 2021 Try: 86

Sat Nov  6 01:01:43 UTC 2021 Try: 87

Sat Nov  6 01:01:43 UTC 2021 Try: 88

Sat Nov  6 01:01:43 UTC 2021 Try: 89

Sat Nov  6 01:01:43 UTC 2021 Try: 90

Sat Nov  6 01:01:43 UTC 2021 Try: 91

Nov  6 01:01:44.495: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000702a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000702a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000702a80, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
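The failure above means the probing client never recorded a reply from backend pod-server-1 through the svc-udp NodePort on node IP 10.10.190.208 within the check window: the pod-client log shows tries 1 through 91 with no successful connection, so the spec fails before it ever cycles the server pod to test conntrack entry cleanup. For orientation only, a minimal UDP probe of the same general shape (this is not the agnhost client the test actually runs; the NodePort value and payload are assumptions, the node IP is taken from the log):

  // probeUDP sketch: send one datagram to nodeIP:nodePort and wait briefly for
  // any reply, returning an error on timeout.
  package main

  import (
      "fmt"
      "net"
      "time"
  )

  func probeUDP(addr string, timeout time.Duration) error {
      conn, err := net.DialTimeout("udp", addr, timeout)
      if err != nil {
          return err
      }
      defer conn.Close()

      if err := conn.SetDeadline(time.Now().Add(timeout)); err != nil {
          return err
      }
      if _, err := conn.Write([]byte("hostname")); err != nil {
          return err
      }

      buf := make([]byte, 1024)
      n, err := conn.Read(buf)
      if err != nil {
          return fmt.Errorf("no UDP reply from %s: %w", addr, err)
      }
      fmt.Printf("reply from %s: %q\n", addr, buf[:n])
      return nil
  }

  func main() {
      // Node IP from the log; the NodePort 30080 is hypothetical.
      if err := probeUDP("10.10.190.208:30080", 3*time.Second); err != nil {
          fmt.Println("probe failed:", err)
      }
  }
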
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-7782".
STEP: Found 8 events.
Nov  6 01:01:44.499: INFO: At 2021-11-06 01:00:31 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov  6 01:01:44.499: INFO: At 2021-11-06 01:00:32 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 380.35456ms
Nov  6 01:01:44.499: INFO: At 2021-11-06 01:00:32 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Nov  6 01:01:44.499: INFO: At 2021-11-06 01:00:33 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Nov  6 01:01:44.499: INFO: At 2021-11-06 01:00:39 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov  6 01:01:44.499: INFO: At 2021-11-06 01:00:40 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 320.167181ms
Nov  6 01:01:44.499: INFO: At 2021-11-06 01:00:40 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Nov  6 01:01:44.499: INFO: At 2021-11-06 01:00:40 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Nov  6 01:01:44.501: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Nov  6 01:01:44.501: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:00:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:00:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:00:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:00:28 +0000 UTC  }]
Nov  6 01:01:44.501: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:00:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:00:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:00:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:00:38 +0000 UTC  }]
Nov  6 01:01:44.501: INFO: 
Nov  6 01:01:44.507: INFO: 
Logging node info for node master1
Nov  6 01:01:44.510: INFO: Node Info: &Node{ObjectMeta:{master1    acabf68f-e6fa-4376-87a7-953399a106b3 79832 0 2021-11-05 20:58:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:37 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:37 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:37 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:01:37 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:01:44.513: INFO: 
Logging kubelet events for node master1
Nov  6 01:01:44.515: INFO: 
Logging pods the kubelet thinks is on node master1
Nov  6 01:01:44.553: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.553: INFO: 	Container kube-proxy ready: true, restart count 1
Nov  6 01:01:44.553: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.553: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:01:44.553: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:01:44.553: INFO: 	Container docker-registry ready: true, restart count 0
Nov  6 01:01:44.553: INFO: 	Container nginx ready: true, restart count 0
Nov  6 01:01:44.553: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.553: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov  6 01:01:44.553: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.553: INFO: 	Container kube-controller-manager ready: true, restart count 3
Nov  6 01:01:44.553: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.553: INFO: 	Container kube-scheduler ready: true, restart count 0
Nov  6 01:01:44.553: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:01:44.553: INFO: 	Init container install-cni ready: true, restart count 2
Nov  6 01:01:44.553: INFO: 	Container kube-flannel ready: true, restart count 2
Nov  6 01:01:44.553: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.553: INFO: 	Container coredns ready: true, restart count 2
Nov  6 01:01:44.553: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:01:44.553: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:01:44.553: INFO: 	Container node-exporter ready: true, restart count 0
W1106 01:01:44.578602      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:01:44.649: INFO: 
Latency metrics for node master1
Nov  6 01:01:44.649: INFO: 
Logging node info for node master2
Nov  6 01:01:44.654: INFO: Node Info: &Node{ObjectMeta:{master2    004d4571-8588-4d18-93d0-ad0af4174866 79891 0 2021-11-05 20:59:23 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:39 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:39 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:39 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:01:39 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:01:44.654: INFO: 
Logging kubelet events for node master2
Nov  6 01:01:44.656: INFO: 
Logging pods the kubelet thinks is on node master2
Nov  6 01:01:44.677: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.677: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov  6 01:01:44.677: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.677: INFO: 	Container kube-scheduler ready: true, restart count 3
Nov  6 01:01:44.677: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.677: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:01:44.677: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.677: INFO: 	Container nfd-controller ready: true, restart count 0
Nov  6 01:01:44.677: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:01:44.677: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:01:44.677: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:01:44.677: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.677: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov  6 01:01:44.677: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.677: INFO: 	Container kube-proxy ready: true, restart count 1
Nov  6 01:01:44.677: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:01:44.677: INFO: 	Init container install-cni ready: true, restart count 0
Nov  6 01:01:44.677: INFO: 	Container kube-flannel ready: true, restart count 3
W1106 01:01:44.691883      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:01:44.760: INFO: 
Latency metrics for node master2
Nov  6 01:01:44.760: INFO: 
Logging node info for node master3
Nov  6 01:01:44.762: INFO: Node Info: &Node{ObjectMeta:{master3    d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 80008 0 2021-11-05 20:59:34 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running 
on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:43 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:43 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:43 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:01:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:01:44.763: INFO: 
Logging kubelet events for node master3
Nov  6 01:01:44.768: INFO: 
Logging pods the kubelet thinks is on node master3
Nov  6 01:01:44.794: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.794: INFO: 	Container autoscaler ready: true, restart count 1
Nov  6 01:01:44.794: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.794: INFO: 	Container kube-scheduler ready: true, restart count 3
Nov  6 01:01:44.794: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.795: INFO: 	Container kube-proxy ready: true, restart count 2
Nov  6 01:01:44.795: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.795: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:01:44.795: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.795: INFO: 	Container coredns ready: true, restart count 1
Nov  6 01:01:44.795: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:01:44.795: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:01:44.795: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:01:44.795: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.795: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov  6 01:01:44.795: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.795: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov  6 01:01:44.795: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:01:44.795: INFO: 	Init container install-cni ready: true, restart count 0
Nov  6 01:01:44.795: INFO: 	Container kube-flannel ready: true, restart count 1
W1106 01:01:44.808938      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:01:44.875: INFO: 
Latency metrics for node master3
Nov  6 01:01:44.876: INFO: 
Logging node info for node node1
Nov  6 01:01:44.878: INFO: Node Info: &Node{ObjectMeta:{node1    290b18e7-da33-4da8-b78a-8a7f28c49abf 79817 0 2021-11-05 21:00:39 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 23:53:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:01:35 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
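The Node Info dump above is simply the v1 Node object for node1. If you need the same details outside the framework, a hedged sketch with plain kubectl against the same kubeconfig (node name and field paths are the standard Node schema, nothing specific to this run):

# Full object, equivalent to the framework's Node Info dump
kubectl --kubeconfig=/root/.kube/config get node node1 -o yaml

# Just the allocatable resources and the conditions the scheduler cares about
kubectl --kubeconfig=/root/.kube/config get node node1 -o jsonpath='{.status.allocatable}{"\n"}'
kubectl --kubeconfig=/root/.kube/config describe node node1 | grep -A 8 'Conditions:'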
Nov  6 01:01:44.879: INFO: 
Logging kubelet events for node node1
Nov  6 01:01:44.881: INFO: 
Logging pods the kubelet thinks are on node node1
Nov  6 01:01:44.901: INFO: netserver-0 started at 2021-11-06 01:01:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container webserver ready: false, restart count 0
Nov  6 01:01:44.902: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Init container install-cni ready: true, restart count 2
Nov  6 01:01:44.902: INFO: 	Container kube-flannel ready: true, restart count 3
Nov  6 01:01:44.902: INFO: nodeport-update-service-n26rr started at 2021-11-06 01:01:15 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov  6 01:01:44.902: INFO: e2e-net-exec started at 2021-11-06 01:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container e2e-net-exec ready: false, restart count 0
Nov  6 01:01:44.902: INFO: service-headless-toggled-z87xd started at 2021-11-06 01:01:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container service-headless-toggled ready: false, restart count 0
Nov  6 01:01:44.902: INFO: netserver-0 started at 2021-11-06 01:01:07 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:01:44.902: INFO: pod-client started at 2021-11-06 01:00:54 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container pod-client ready: true, restart count 0
Nov  6 01:01:44.902: INFO: service-headless-toggled-ldkb7 started at 2021-11-06 01:01:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container service-headless-toggled ready: false, restart count 0
Nov  6 01:01:44.902: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:01:44.902: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container discover ready: false, restart count 0
Nov  6 01:01:44.902: INFO: 	Container init ready: false, restart count 0
Nov  6 01:01:44.902: INFO: 	Container install ready: false, restart count 0
Nov  6 01:01:44.902: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:01:44.902: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:01:44.902: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container tas-extender ready: true, restart count 0
Nov  6 01:01:44.902: INFO: execpodwsgzw started at 2021-11-06 01:01:33 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container agnhost-container ready: true, restart count 0
Nov  6 01:01:44.902: INFO: pod-client started at 2021-11-06 01:00:28 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container pod-client ready: true, restart count 0
Nov  6 01:01:44.902: INFO: netserver-0 started at 2021-11-06 01:01:10 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:01:44.902: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container kube-proxy ready: true, restart count 2
Nov  6 01:01:44.902: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container nodereport ready: true, restart count 0
Nov  6 01:01:44.902: INFO: 	Container reconcile ready: true, restart count 0
Nov  6 01:01:44.902: INFO: prometheus-k8s-0 started at 2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container config-reloader ready: true, restart count 0
Nov  6 01:01:44.902: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov  6 01:01:44.902: INFO: 	Container grafana ready: true, restart count 0
Nov  6 01:01:44.902: INFO: 	Container prometheus ready: true, restart count 1
Nov  6 01:01:44.902: INFO: service-headless-toggled-gqnz7 started at 2021-11-06 01:01:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container service-headless-toggled ready: false, restart count 0
Nov  6 01:01:44.902: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov  6 01:01:44.902: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Nov  6 01:01:44.902: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov  6 01:01:44.902: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container collectd ready: true, restart count 0
Nov  6 01:01:44.902: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov  6 01:01:44.902: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov  6 01:01:44.902: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container nfd-worker ready: true, restart count 0
Nov  6 01:01:44.902: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov  6 01:01:44.902: INFO: service-proxy-disabled-cpjvq started at 2021-11-06 01:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container service-proxy-disabled ready: false, restart count 0
Nov  6 01:01:44.902: INFO: test-container-pod started at 2021-11-06 01:01:37 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:01:44.902: INFO: service-proxy-disabled-gnjq2 started at 2021-11-06 01:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container service-proxy-disabled ready: false, restart count 0
Nov  6 01:01:44.902: INFO: netserver-0 started at 2021-11-06 01:01:13 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:44.902: INFO: 	Container webserver ready: true, restart count 0
W1106 01:01:44.915870      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:01:46.291: INFO: 
Latency metrics for node node1
Nov  6 01:01:46.291: INFO: 
Logging node info for node node2
Nov  6 01:01:46.294: INFO: Node Info: &Node{ObjectMeta:{node2    7d7e71f0-82d7-49ba-b69a-56600dd59b3f 79968 0 2021-11-05 21:00:39 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 23:54:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-06 00:16:08 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:43 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:43 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:01:43 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:01:43 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 
quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:01:46.294: INFO: 
Logging kubelet events for node node2
Nov  6 01:01:46.297: INFO: 
Logging pods the kubelet thinks are on node node2
Nov  6 01:01:46.534: INFO: netserver-1 started at 2021-11-06 01:01:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container webserver ready: false, restart count 0
Nov  6 01:01:46.534: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov  6 01:01:46.534: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container nfd-worker ready: true, restart count 0
Nov  6 01:01:46.534: INFO: pod-server-1 started at 2021-11-06 01:01:04 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container agnhost-container ready: true, restart count 0
Nov  6 01:01:46.534: INFO: boom-server started at 2021-11-06 01:01:38 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container boom-server ready: false, restart count 0
Nov  6 01:01:46.534: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container discover ready: false, restart count 0
Nov  6 01:01:46.534: INFO: 	Container init ready: false, restart count 0
Nov  6 01:01:46.534: INFO: 	Container install ready: false, restart count 0
Nov  6 01:01:46.534: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:01:46.534: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:01:46.534: INFO: netserver-1 started at 2021-11-06 01:01:13 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:01:46.534: INFO: test-container-pod started at 2021-11-06 01:01:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:01:46.534: INFO: test-container-pod started at 2021-11-06 01:01:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container webserver ready: false, restart count 0
Nov  6 01:01:46.534: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Nov  6 01:01:46.534: INFO: netserver-1 started at 2021-11-06 01:01:10 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:01:46.534: INFO: pod-server-2 started at  (0+0 container statuses recorded)
Nov  6 01:01:46.534: INFO: nodeport-update-service-jqdx5 started at 2021-11-06 01:01:15 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov  6 01:01:46.534: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Init container install-cni ready: true, restart count 1
Nov  6 01:01:46.534: INFO: 	Container kube-flannel ready: true, restart count 2
Nov  6 01:01:46.534: INFO: service-headless-84q9g started at 2021-11-06 01:01:06 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container service-headless ready: true, restart count 0
Nov  6 01:01:46.534: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container kube-proxy ready: true, restart count 2
Nov  6 01:01:46.534: INFO: service-headless-kx58d started at 2021-11-06 01:01:06 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container service-headless ready: true, restart count 0
Nov  6 01:01:46.534: INFO: netserver-1 started at 2021-11-06 01:01:07 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.534: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:01:46.535: INFO: execpodsdt72 started at 2021-11-06 01:01:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container agnhost-container ready: true, restart count 0
Nov  6 01:01:46.535: INFO: netserver-1 started at  (0+0 container statuses recorded)
Nov  6 01:01:46.535: INFO: pod-server-1 started at 2021-11-06 01:00:38 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container agnhost-container ready: true, restart count 0
Nov  6 01:01:46.535: INFO: service-headless-7q9zf started at 2021-11-06 01:01:06 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container service-headless ready: true, restart count 0
Nov  6 01:01:46.535: INFO: externalip-test-pq7pg started at 2021-11-06 01:01:07 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container externalip-test ready: true, restart count 0
Nov  6 01:01:46.535: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:01:46.535: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov  6 01:01:46.535: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container collectd ready: true, restart count 0
Nov  6 01:01:46.535: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov  6 01:01:46.535: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov  6 01:01:46.535: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container nodereport ready: true, restart count 0
Nov  6 01:01:46.535: INFO: 	Container reconcile ready: true, restart count 0
Nov  6 01:01:46.535: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:01:46.535: INFO: 	Container prometheus-operator ready: true, restart count 0
Nov  6 01:01:46.535: INFO: externalip-test-tsxkh started at 2021-11-06 01:01:07 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:01:46.535: INFO: 	Container externalip-test ready: true, restart count 0
Nov  6 01:01:46.535: INFO: service-proxy-disabled-jpmpq started at  (0+0 container statuses recorded)
W1106 01:01:46.552763      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:01:47.800: INFO: 
Latency metrics for node node2
Nov  6 01:01:47.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-7782" for this suite.


• Failure [79.509 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Nov  6 01:01:44.495: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":0,"skipped":145,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:47.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:48.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4048" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":1,"skipped":231,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:07.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-8881
STEP: creating replication controller externalip-test in namespace services-8881
I1106 01:01:07.281337      27 runners.go:190] Created replication controller with name: externalip-test, namespace: services-8881, replica count: 2
I1106 01:01:10.332615      27 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:13.333083      27 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:16.334244      27 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:19.335483      27 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:22.339352      27 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:25.342102      27 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:28.343018      27 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:31.343209      27 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:34.344206      27 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov  6 01:01:34.344: INFO: Creating new exec pod
Nov  6 01:01:43.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8881 exec execpodsdt72 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Nov  6 01:01:43.665: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Nov  6 01:01:43.665: INFO: stdout: ""
Nov  6 01:01:44.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8881 exec execpodsdt72 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Nov  6 01:01:45.378: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Nov  6 01:01:45.378: INFO: stdout: ""
Nov  6 01:01:45.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8881 exec execpodsdt72 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Nov  6 01:01:46.228: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Nov  6 01:01:46.228: INFO: stdout: ""
Nov  6 01:01:46.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8881 exec execpodsdt72 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Nov  6 01:01:47.490: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Nov  6 01:01:47.490: INFO: stdout: "externalip-test-tsxkh"
Nov  6 01:01:47.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8881 exec execpodsdt72 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.34.15 80'
Nov  6 01:01:48.204: INFO: stderr: "+ nc -v -t -w 2 10.233.34.15 80\nConnection to 10.233.34.15 80 port [tcp/http] succeeded!\n+ echo hostName\n"
Nov  6 01:01:48.204: INFO: stdout: "externalip-test-pq7pg"
Nov  6 01:01:48.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8881 exec execpodsdt72 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Nov  6 01:01:48.453: INFO: stderr: "+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n+ echo hostName\n"
Nov  6 01:01:48.453: INFO: stdout: "externalip-test-tsxkh"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:48.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8881" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:41.210 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":5,"skipped":419,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:49.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Nov  6 01:01:49.069: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:49.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4268" for this suite.


S [SKIPPING] [0.029 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
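This spec is provider-gated, so the skip is expected on a bare-metal/local run. A hedged sketch, assuming the stock e2e.test binary and its standard flags, of how such provider-specific specs get selected when the suite is pointed at GCE:

./e2e.test --provider=gce --kubeconfig=/root/.kube/config \
  --ginkgo.focus='should provide DNS for the cluster \[Provider:GCE\]'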
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:13.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-4721
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:01:13.270: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:13.342: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:15.348: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:17.345: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:19.346: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:21.345: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:23.345: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:25.346: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:27.344: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:29.347: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:31.345: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:33.344: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:35.347: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:01:35.352: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:37.357: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:39.356: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:01:51.379: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:01:51.379: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:51.386: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:01:51.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4721" for this suite.


S [SKIPPING] [38.244 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
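The "(not -1)" in the skip reason suggests the node count is read from the suite's configuration (the e2e --num-nodes value, which appears to have been left at its default) rather than from the live cluster, since two netserver pods were scheduled just above. A quick way to see how many schedulable, Ready nodes the cluster actually has (a plain kubectl sketch; the jsonpath expression is illustrative):

  # count nodes reporting Ready (whole-word match, so NotReady is not counted)
  kubectl get nodes --no-headers | grep -cw Ready
  # list taints that could keep e2e test pods off a node
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'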
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:39.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-8749
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:01:39.819: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:39.848: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:41.852: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:43.853: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:45.852: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:47.852: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:49.853: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:51.852: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:53.851: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:55.853: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:57.852: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:59.854: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:01.853: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:02:01.857: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:02:05.879: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:02:05.879: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:02:05.886: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:05.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8749" for this suite.


S [SKIPPING] [26.194 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:00:54.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-2764
STEP: creating a client pod for probing the service svc-udp
Nov  6 01:00:54.733: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:56.737: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:00:58.738: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:00.736: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:02.736: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:04.739: INFO: The status of Pod pod-client is Running (Ready = true)
Nov  6 01:01:04.892: INFO: Pod client logs: Sat Nov  6 01:00:58 UTC 2021
Sat Nov  6 01:00:58 UTC 2021 Try: 1

Sat Nov  6 01:00:58 UTC 2021 Try: 2

Sat Nov  6 01:00:58 UTC 2021 Try: 3

Sat Nov  6 01:00:58 UTC 2021 Try: 4

Sat Nov  6 01:00:58 UTC 2021 Try: 5

Sat Nov  6 01:00:58 UTC 2021 Try: 6

Sat Nov  6 01:00:58 UTC 2021 Try: 7

Sat Nov  6 01:01:03 UTC 2021 Try: 8

Sat Nov  6 01:01:03 UTC 2021 Try: 9

Sat Nov  6 01:01:03 UTC 2021 Try: 10

Sat Nov  6 01:01:03 UTC 2021 Try: 11

Sat Nov  6 01:01:03 UTC 2021 Try: 12

Sat Nov  6 01:01:03 UTC 2021 Try: 13

STEP: creating a backend pod pod-server-1 for the service svc-udp
Nov  6 01:01:04.903: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:06.906: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:08.906: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:10.906: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:12.907: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:14.908: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:16.906: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:18.908: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:20.907: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:22.907: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:24.907: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:26.906: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:28.906: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:30.906: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:32.907: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:34.909: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:36.908: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:38.907: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-2764 to expose endpoints map[pod-server-1:[80]]
Nov  6 01:01:38.917: INFO: successfully validated that service svc-udp in namespace conntrack-2764 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Nov  6 01:01:43.950: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:45.954: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:47.953: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:49.954: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:51.954: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:53.954: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:55.953: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Nov  6 01:01:55.956: INFO: Cleaning up pod-server-1 pod
Nov  6 01:01:55.962: INFO: Waiting for pod pod-server-1 to disappear
Nov  6 01:01:55.965: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-2764 to expose endpoints map[pod-server-2:[80]]
Nov  6 01:01:55.971: INFO: successfully validated that service svc-udp in namespace conntrack-2764 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:05.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-2764" for this suite.


• [SLOW TEST:71.316 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":1,"skipped":193,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:51.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Nov  6 01:01:51.653: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:53.656: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:55.659: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:57.656: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:59.659: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Nov  6 01:01:59.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4629 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Nov  6 01:01:59.895: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Nov  6 01:01:59.895: INFO: stdout: "iptables"
Nov  6 01:01:59.895: INFO: proxyMode: iptables
Nov  6 01:01:59.900: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Nov  6 01:01:59.903: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-4629
Nov  6 01:01:59.909: INFO: sourceip-test cluster ip: 10.233.43.65
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Nov  6 01:01:59.925: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:01.928: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:03.930: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-4629 to expose endpoints map[echo-sourceip:[8080]]
Nov  6 01:02:03.939: INFO: successfully validated that service sourceip-test in namespace services-4629 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Nov  6 01:02:03.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Nov  6 01:02:05.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-59bcc94b4b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  6 01:02:07.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757326, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-59bcc94b4b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  6 01:02:09.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757326, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-59bcc94b4b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  6 01:02:11.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757326, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771757323, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-59bcc94b4b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov  6 01:02:13.956: INFO: Waiting up to 2m0s to get response from 10.233.43.65:8080
Nov  6 01:02:13.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4629 exec pause-pod-59bcc94b4b-mvg88 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.43.65:8080/clientip'
Nov  6 01:02:14.333: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.43.65:8080/clientip\n"
Nov  6 01:02:14.333: INFO: stdout: "10.244.3.204:35072"
STEP: Verifying the preserved source ip
Nov  6 01:02:14.333: INFO: Waiting up to 2m0s to get response from 10.233.43.65:8080
Nov  6 01:02:14.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4629 exec pause-pod-59bcc94b4b-tnlbb -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.43.65:8080/clientip'
Nov  6 01:02:14.691: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.43.65:8080/clientip\n"
Nov  6 01:02:14.691: INFO: stdout: "10.244.4.35:54184"
STEP: Verifying the preserved source ip
Nov  6 01:02:14.691: INFO: Deleting deployment
Nov  6 01:02:14.697: INFO: Cleaning up the echo server pod
Nov  6 01:02:14.702: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:14.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4629" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:23.101 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":2,"skipped":515,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:44.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-9840
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:01:44.908: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:44.938: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:46.942: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:48.940: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:50.942: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:52.942: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:54.943: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:56.941: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:01:58.942: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:00.941: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:02.941: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:04.942: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:06.942: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:02:06.947: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:02:14.986: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:02:14.986: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:02:15.001: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:15.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9840" for this suite.


S [SKIPPING] [30.226 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138

  Requires at least 2 nodes (not -1)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:49.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-6969
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:01:49.333: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:49.365: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:51.369: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:53.370: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:55.372: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:57.370: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:59.371: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:01.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:03.369: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:05.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:07.372: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:09.370: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:02:09.374: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov  6 01:02:11.377: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov  6 01:02:13.379: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:02:19.400: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:02:19.400: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:02:19.407: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:19.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6969" for this suite.


S [SKIPPING] [30.200 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:02:15.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-7177
STEP: deleting original service nodeport-reuse
Nov  6 01:02:15.670: INFO: Creating new host exec pod
Nov  6 01:02:15.683: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:17.686: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:19.687: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:21.687: INFO: The status of Pod hostexec is Running (Ready = true)
Nov  6 01:02:21.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7177 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :32728' | tail -n +2 | grep LISTEN'
Nov  6 01:02:22.068: INFO: stderr: "+ ss -ant46 'sport = :32728'\n+ tail -n +2\n+ grep LISTEN\n"
Nov  6 01:02:22.068: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 32728
STEP: deleting service nodeport-reuse in namespace services-7177
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:22.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7177" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:6.472 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":3,"skipped":604,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:02:19.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Nov  6 01:02:19.608: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-8849" to be "Succeeded or Failed"
Nov  6 01:02:19.612: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.673555ms
Nov  6 01:02:21.616: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007271683s
Nov  6 01:02:23.620: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012021941s
Nov  6 01:02:25.625: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016359807s
STEP: Saw pod success
Nov  6 01:02:25.625: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:25.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8849" for this suite.


• [SLOW TEST:6.152 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":2,"skipped":722,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}

SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Nov  6 01:02:25.684: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:02:06.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-9856
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:02:06.234: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:02:06.274: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:08.281: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:10.278: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:12.279: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:14.279: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:16.276: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:18.276: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:20.279: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:22.277: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:24.279: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:26.277: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:02:26.282: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov  6 01:02:28.287: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:02:34.308: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:02:34.308: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:02:34.315: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:34.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9856" for this suite.


S [SKIPPING] [28.198 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Nov  6 01:02:34.327: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:02:06.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-3478
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:02:06.252: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:02:06.291: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:08.294: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:10.297: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:12.295: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:14.296: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:16.293: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:18.294: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:20.294: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:22.295: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:24.296: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:26.294: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:28.295: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:02:28.299: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:02:34.319: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:02:34.319: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:02:34.326: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:34.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3478" for this suite.


S [SKIPPING] [28.220 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Nov  6 01:02:34.336: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:06.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-1773
STEP: creating service service-headless in namespace services-1773
STEP: creating replication controller service-headless in namespace services-1773
I1106 01:01:06.430759      30 runners.go:190] Created replication controller with name: service-headless, namespace: services-1773, replica count: 3
I1106 01:01:09.484022      30 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:12.485199      30 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:15.486706      30 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:18.489758      30 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:21.490302      30 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:24.492126      30 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:27.493227      30 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:30.499145      30 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:33.499440      30 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:36.500815      30 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:39.501610      30 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-1773
STEP: creating service service-headless-toggled in namespace services-1773
STEP: creating replication controller service-headless-toggled in namespace services-1773
I1106 01:01:39.514357      30 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-1773, replica count: 3
I1106 01:01:42.565048      30 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:45.565827      30 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:48.566658      30 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Nov  6 01:01:48.569: INFO: Creating new host exec pod
Nov  6 01:01:48.583: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:50.588: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:52.588: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:54.586: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:56.590: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:58.587: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov  6 01:01:58.587: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov  6 01:02:04.611: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.7:80 2>&1 || true; echo; done" in pod services-1773/verify-service-up-host-exec-pod
Nov  6 01:02:04.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.7:80 2>&1 || true; echo; done'
Nov  6 01:02:05.043: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n"
Nov  6 01:02:05.043: INFO: stdout: "service-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\
nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\n"
Nov  6 01:02:05.043: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.7:80 2>&1 || true; echo; done" in pod services-1773/verify-service-up-exec-pod-xg5qx
Nov  6 01:02:05.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-up-exec-pod-xg5qx -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.7:80 2>&1 || true; echo; done'
Nov  6 01:02:05.877: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n"
Nov  6 01:02:05.877: INFO: stdout: "service-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\
nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1773
STEP: Deleting pod verify-service-up-exec-pod-xg5qx in namespace services-1773
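
The up-check that just finished fires 150 wget requests at the ClusterIP 10.233.33.7:80 from both a host-network pod and an ordinary exec pod, and the service only counts as up once every expected backend pod name appears in the captured stdout. A minimal sketch of that kind of tally, assuming the loop output above has been saved to a file (wget_out.txt is illustrative, not a file the framework writes):

    # Count distinct backends that answered the 150 wget probes.
    # wget_out.txt is assumed to hold one pod name per line, as in the
    # stdout above; 3 matches the replica count behind this service.
    got=$(sort -u wget_out.txt | grep -c 'service-headless-toggled-')
    if [ "$got" -eq 3 ]; then
        echo "service is up: all 3 backends reachable"
    else
        echo "only $got of 3 backends answered" >&2
        exit 1
    fi
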
STEP: verifying service-headless is not up
Nov  6 01:02:05.891: INFO: Creating new host exec pod
Nov  6 01:02:05.903: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:07.905: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:09.907: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Nov  6 01:02:09.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.59.120:80 && echo service-down-failed'
Nov  6 01:02:12.179: INFO: rc: 28
Nov  6 01:02:12.179: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.59.120:80 && echo service-down-failed" in pod services-1773/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.59.120:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.59.120:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1773
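
The rc 28 logged above is curl's exit code for a connection timeout, so the '&& echo service-down-failed' marker never prints and the failed exec is exactly the outcome this step wants: the ClusterIP 10.233.59.120 of the always-headless service does not answer. The same probe works standalone; a small sketch (the address is copied from this log, the wrapper is not part of the framework):

    # Probe a ClusterIP and treat a connect failure as "service is down".
    # curl exit code 28 means the 2-second connect timeout fired.
    ip=10.233.59.120
    curl -g -s --connect-timeout 2 "http://${ip}:80" >/dev/null
    rc=$?
    if [ "$rc" -eq 0 ]; then
        echo "unexpected: ${ip}:80 still answered (service-down-failed)"
        exit 1
    fi
    echo "ok: ${ip}:80 unreachable (curl exit ${rc}; 28 = connect timeout)"
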
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Nov  6 01:02:12.195: INFO: Creating new host exec pod
Nov  6 01:02:12.209: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:14.212: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:16.212: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:18.213: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Nov  6 01:02:18.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.33.7:80 && echo service-down-failed'
Nov  6 01:02:20.556: INFO: rc: 28
Nov  6 01:02:20.556: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.33.7:80 && echo service-down-failed" in pod services-1773/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.33.7:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.33.7:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1773
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Nov  6 01:02:20.568: INFO: Creating new host exec pod
Nov  6 01:02:20.582: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:22.586: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:24.587: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov  6 01:02:24.587: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov  6 01:02:30.611: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.7:80 2>&1 || true; echo; done" in pod services-1773/verify-service-up-host-exec-pod
Nov  6 01:02:30.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.7:80 2>&1 || true; echo; done'
Nov  6 01:02:31.219: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n"
Nov  6 01:02:31.220: INFO: stdout: "service-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\
nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\n"
Nov  6 01:02:31.220: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.7:80 2>&1 || true; echo; done" in pod services-1773/verify-service-up-exec-pod-8kc4h
Nov  6 01:02:31.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-up-exec-pod-8kc4h -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.7:80 2>&1 || true; echo; done'
Nov  6 01:02:31.689: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.33.7:80\n+ echo\n"
Nov  6 01:02:31.689: INFO: stdout: "service-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\
nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-gqnz7\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-ldkb7\nservice-headless-toggled-z87xd\nservice-headless-toggled-z87xd\nservice-headless-toggled-gqnz7\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1773
STEP: Deleting pod verify-service-up-exec-pod-8kc4h in namespace services-1773
STEP: verifying service-headless is still not up
Nov  6 01:02:31.702: INFO: Creating new host exec pod
Nov  6 01:02:31.716: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:33.719: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Nov  6 01:02:33.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.59.120:80 && echo service-down-failed'
Nov  6 01:02:36.010: INFO: rc: 28
Nov  6 01:02:36.010: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.59.120:80 && echo service-down-failed" in pod services-1773/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1773 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.59.120:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.59.120:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1773
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:36.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1773" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:89.626 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":57,"failed":0}
Nov  6 01:02:36.028: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:02:14.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-4444
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:02:15.029: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:02:15.060: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:17.064: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:19.065: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:21.064: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:23.063: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:25.064: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:27.063: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:29.068: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:31.063: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:33.065: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:35.065: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:37.064: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:02:37.070: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:02:43.091: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:02:43.091: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:02:43.100: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:43.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4444" for this suite.


S [SKIPPING] [28.199 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
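
The skip above comes from the framework's node check at utils.go:782: the client-IP session-affinity spec needs endpoints spread across at least two schedulable nodes, and the count it got back here was -1 rather than a usable total, so the spec skips instead of failing. Before expecting these multi-node Granular Checks to run, it can help to confirm how many schedulable workers the cluster exposes; one rough way to do that (the role-label selector is an assumption about how control-plane nodes are labelled):

    # Count workers that are Ready and not cordoned; the multi-node
    # service checks skip themselves when fewer than 2 are available.
    kubectl get nodes \
        --selector='!node-role.kubernetes.io/master,!node-role.kubernetes.io/control-plane' \
        --no-headers | awk '$2 == "Ready"' | wc -l
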
Nov  6 01:02:43.121: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:44.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-6095
STEP: creating service service-proxy-disabled in namespace services-6095
STEP: creating replication controller service-proxy-disabled in namespace services-6095
I1106 01:01:44.238883      37 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-6095, replica count: 3
I1106 01:01:47.290800      37 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:50.291295      37 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:53.292150      37 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:56.292915      37 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-6095
STEP: creating service service-proxy-toggled in namespace services-6095
STEP: creating replication controller service-proxy-toggled in namespace services-6095
I1106 01:01:56.305399      37 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-6095, replica count: 3
I1106 01:01:59.357049      37 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:02:02.358495      37 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:02:05.359912      37 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
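
This second fixture mirrors the headless one, but for the service.kubernetes.io/service-proxy-name label: services carrying that label are meant to be claimed by an alternative proxy, so the default kube-proxy ignores them. That is why service-proxy-disabled should never answer, while service-proxy-toggled stays reachable until the label is applied to it. A quick, hedged way to see which services in the namespace the default proxy will skip (selection by label presence; namespace from this log):

    # List services the default kube-proxy ignores because they carry
    # the service.kubernetes.io/service-proxy-name label (any value).
    kubectl --namespace services-6095 get services \
        --selector='service.kubernetes.io/service-proxy-name' -o name
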
STEP: verifying service is up
Nov  6 01:02:05.362: INFO: Creating new host exec pod
Nov  6 01:02:05.376: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:07.379: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:09.380: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov  6 01:02:09.380: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov  6 01:02:17.398: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.89:80 2>&1 || true; echo; done" in pod services-6095/verify-service-up-host-exec-pod
Nov  6 01:02:17.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.89:80 2>&1 || true; echo; done'
Nov  6 01:02:17.812: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n"
Nov  6 01:02:17.812: INFO: stdout: "service-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-pr
oxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\n"
Nov  6 01:02:17.812: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.89:80 2>&1 || true; echo; done" in pod services-6095/verify-service-up-exec-pod-mw7p2
Nov  6 01:02:17.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-up-exec-pod-mw7p2 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.89:80 2>&1 || true; echo; done'
Nov  6 01:02:18.232: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n"
Nov  6 01:02:18.233: INFO: stdout: "service-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-pr
oxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6095
STEP: Deleting pod verify-service-up-exec-pod-mw7p2 in namespace services-6095
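Editor's note: the 150-iteration wget loop above fans requests across the Service's ClusterIP and prints the hostname of whichever backend answered, so the "service is up" check only has to reduce that stdout to a set of distinct pod names. A minimal sketch of that reduction, assuming the raw stdout shown above is passed in as one string (the helper name countDistinctBackends is an illustration, not the framework's own API):

    package e2esketch

    import "strings"

    // countDistinctBackends takes the newline-separated wget output (one serving
    // pod hostname per request) and returns how many different backends replied.
    func countDistinctBackends(stdout string) int {
        seen := make(map[string]struct{})
        for _, line := range strings.Split(stdout, "\n") {
            line = strings.TrimSpace(line)
            if line != "" {
                seen[line] = struct{}{}
            }
        }
        return len(seen)
    }

Applied to the stdout above it would return 3, matching the three pod names visible in the output (service-proxy-toggled-4z9rk, -2qknb and -j42qb).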
STEP: verifying service-disabled is not up
Nov  6 01:02:18.246: INFO: Creating new host exec pod
Nov  6 01:02:18.264: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:20.268: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Nov  6 01:02:20.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.51.167:80 && echo service-down-failed'
Nov  6 01:02:22.847: INFO: rc: 28
Nov  6 01:02:22.847: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.51.167:80 && echo service-down-failed" in pod services-6095/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.51.167:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.51.167:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6095
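Editor's note: the "service-disabled is not up" check above is the inverse probe: a single curl with a 2-second connect timeout is run from the host-exec pod, and the timeout (curl exit code 28) is the expected outcome. A minimal sketch of that probe as a helper that shells out to kubectl, where the function name and parameters are illustrative assumptions rather than the framework's own:

    package e2esketch

    import (
        "fmt"
        "os/exec"
    )

    // verifyServiceDown runs the same curl probe as in the log above inside the
    // given pod and returns nil only when the probe fails to connect, which is
    // what the test expects for a service that should not be proxied.
    func verifyServiceDown(kubeconfig, namespace, pod, clusterIP string) error {
        script := fmt.Sprintf(
            "curl -g -s --connect-timeout 2 http://%s:80 && echo service-down-failed", clusterIP)
        cmd := exec.Command("kubectl",
            "--kubeconfig="+kubeconfig, "--namespace="+namespace,
            "exec", pod, "--", "/bin/sh", "-x", "-c", script)
        out, err := cmd.CombinedOutput()
        if err != nil {
            // curl exiting 28 (connect timeout) surfaces here as a non-nil error:
            // the ClusterIP is not answering, so the service really is down.
            return nil
        }
        return fmt.Errorf("service unexpectedly reachable, output: %s", out)
    }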
STEP: adding service-proxy-name label
STEP: verifying service is not up
Nov  6 01:02:22.859: INFO: Creating new host exec pod
Nov  6 01:02:22.872: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:24.877: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:26.875: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:28.877: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Nov  6 01:02:28.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.25.89:80 && echo service-down-failed'
Nov  6 01:02:31.136: INFO: rc: 28
Nov  6 01:02:31.136: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.25.89:80 && echo service-down-failed" in pod services-6095/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.25.89:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.25.89:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6095
STEP: removing service-proxy-name label
STEP: verifying service is up
Nov  6 01:02:31.148: INFO: Creating new host exec pod
Nov  6 01:02:31.163: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:33.166: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:35.166: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov  6 01:02:35.166: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov  6 01:02:43.180: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.89:80 2>&1 || true; echo; done" in pod services-6095/verify-service-up-host-exec-pod
Nov  6 01:02:43.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.89:80 2>&1 || true; echo; done'
Nov  6 01:02:43.590: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n"
Nov  6 01:02:43.591: INFO: stdout: "service-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-pr
oxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\n"
Nov  6 01:02:43.591: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.89:80 2>&1 || true; echo; done" in pod services-6095/verify-service-up-exec-pod-rmr6h
Nov  6 01:02:43.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-up-exec-pod-rmr6h -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.25.89:80 2>&1 || true; echo; done'
Nov  6 01:02:43.990: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.25.89:80\n+ echo\n"
Nov  6 01:02:43.991: INFO: stdout: "service-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-pr
oxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-j42qb\nservice-proxy-toggled-4z9rk\nservice-proxy-toggled-2qknb\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6095
STEP: Deleting pod verify-service-up-exec-pod-rmr6h in namespace services-6095
STEP: verifying service-disabled is still not up
Nov  6 01:02:44.002: INFO: Creating new host exec pod
Nov  6 01:02:44.016: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:46.020: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:48.019: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Nov  6 01:02:48.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.51.167:80 && echo service-down-failed'
Nov  6 01:02:50.282: INFO: rc: 28
Nov  6 01:02:50.282: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.51.167:80 && echo service-down-failed" in pod services-6095/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6095 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.51.167:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.51.167:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-6095
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:50.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6095" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:66.095 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":5,"skipped":1270,"failed":0}
Nov  6 01:02:50.302: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:38.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Nov  6 01:01:38.369: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:40.374: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:42.373: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:44.374: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:46.372: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:48.373: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:50.374: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Nov  6 01:01:50.393: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:52.398: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:54.398: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:56.396: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:58.396: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
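Editor's note: what this check exercises is that the boom server deliberately injects packets conntrack will classify as INVALID, and the node is expected to drop them (kube-proxy in iptables mode installs a rule to that effect) rather than forward them to the client, which would otherwise answer with a RST and tear the connection down. A small sketch, assuming shell access on the node and that iptables mode is in use, that looks for such a drop rule in the node's rule dump; the helper name is an assumption:

    package e2esketch

    import (
        "os/exec"
        "strings"
    )

    // hasInvalidDropRule dumps the node's iptables rules and reports whether a
    // rule dropping conntrack-INVALID packets is present, which is what keeps
    // the client from ever seeing the out-of-window packet and replying with RST.
    func hasInvalidDropRule() (bool, error) {
        out, err := exec.Command("iptables-save").Output()
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "--ctstate INVALID") && strings.Contains(line, "-j DROP") {
                return true, nil
            }
        }
        return false, nil
    }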
Nov  6 01:02:58.765: INFO: boom-server pod logs: 2021/11/06 01:01:44 external ip: 10.244.4.23
2021/11/06 01:01:44 listen on 0.0.0.0:9000
2021/11/06 01:01:44 probing 10.244.4.23
2021/11/06 01:01:55 tcp packet: &{SrcPort:34725 DestPort:9000 Seq:4213672470 Ack:0 Flags:40962 WindowSize:29200 Checksum:17414 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:01:55 tcp packet: &{SrcPort:34725 DestPort:9000 Seq:4213672471 Ack:2205473811 Flags:32784 WindowSize:229 Checksum:16291 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:01:55 connection established
2021/11/06 01:01:55 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 135 165 131 115 85 115 251 39 138 23 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:01:55 checksumer: &{sum:448356 oddByte:33 length:39}
2021/11/06 01:01:55 ret:  448389
2021/11/06 01:01:55 ret:  55179
2021/11/06 01:01:55 ret:  55179
2021/11/06 01:01:55 boom packet injected
2021/11/06 01:01:55 tcp packet: &{SrcPort:34725 DestPort:9000 Seq:4213672471 Ack:2205473811 Flags:32785 WindowSize:229 Checksum:16290 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
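Editor's note: the checksumer lines above are the standard Internet-checksum fold: the 16-bit words of the pseudo-header and payload are summed into a 32-bit accumulator (sum:448356), the trailing odd byte is added (+33 gives ret: 448389), and the carry bits are folded back into the low 16 bits until nothing overflows (448389 becomes 55179, printed twice because a second fold changes nothing). A minimal sketch of that fold, assuming the word accumulation has already been done as in the log; the final TCP checksum is the one's complement of the folded value:

    package e2esketch

    // fold16 repeatedly adds the upper carry bits back into the low 16 bits,
    // reproducing the ret: 448389 -> 55179 -> 55179 sequence in the log above.
    func fold16(sum uint32) uint16 {
        for sum > 0xffff {
            sum = (sum >> 16) + (sum & 0xffff)
        }
        return uint16(sum)
    }

For example, fold16(448356+33) yields 55179, matching the folded ret values printed by the boom server.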
2021/11/06 01:01:57 tcp packet: &{SrcPort:42779 DestPort:9000 Seq:1027380505 Ack:0 Flags:40962 WindowSize:29200 Checksum:53159 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:01:57 tcp packet: &{SrcPort:42779 DestPort:9000 Seq:1027380506 Ack:168736996 Flags:32784 WindowSize:229 Checksum:24587 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:01:57 connection established
2021/11/06 01:01:57 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 167 27 10 13 50 68 61 60 149 26 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:01:57 checksumer: &{sum:380725 oddByte:33 length:39}
2021/11/06 01:01:57 ret:  380758
2021/11/06 01:01:57 ret:  53083
2021/11/06 01:01:57 ret:  53083
2021/11/06 01:01:57 boom packet injected
2021/11/06 01:01:57 tcp packet: &{SrcPort:42779 DestPort:9000 Seq:1027380506 Ack:168736996 Flags:32785 WindowSize:229 Checksum:24586 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:01:59 tcp packet: &{SrcPort:46427 DestPort:9000 Seq:1933495295 Ack:0 Flags:40962 WindowSize:29200 Checksum:19631 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:01:59 tcp packet: &{SrcPort:46427 DestPort:9000 Seq:1933495296 Ack:3693386752 Flags:32784 WindowSize:229 Checksum:8208 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:01:59 connection established
2021/11/06 01:01:59 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 181 91 220 35 21 96 115 62 204 0 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:01:59 checksumer: &{sum:404069 oddByte:33 length:39}
2021/11/06 01:01:59 ret:  404102
2021/11/06 01:01:59 ret:  10892
2021/11/06 01:01:59 ret:  10892
2021/11/06 01:01:59 boom packet injected
2021/11/06 01:01:59 tcp packet: &{SrcPort:46427 DestPort:9000 Seq:1933495296 Ack:3693386752 Flags:32785 WindowSize:229 Checksum:8207 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:01 tcp packet: &{SrcPort:42421 DestPort:9000 Seq:3790530098 Ack:0 Flags:40962 WindowSize:29200 Checksum:52129 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:01 tcp packet: &{SrcPort:42421 DestPort:9000 Seq:3790530099 Ack:3193434401 Flags:32784 WindowSize:229 Checksum:24540 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:01 connection established
2021/11/06 01:02:01 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 165 181 190 86 106 129 225 238 230 51 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:01 checksumer: &{sum:506900 oddByte:33 length:39}
2021/11/06 01:02:01 ret:  506933
2021/11/06 01:02:01 ret:  48188
2021/11/06 01:02:01 ret:  48188
2021/11/06 01:02:02 boom packet injected
2021/11/06 01:02:02 tcp packet: &{SrcPort:42421 DestPort:9000 Seq:3790530099 Ack:3193434401 Flags:32785 WindowSize:229 Checksum:24539 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:04 tcp packet: &{SrcPort:33269 DestPort:9000 Seq:811793774 Ack:0 Flags:40962 WindowSize:29200 Checksum:33249 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:04 tcp packet: &{SrcPort:33269 DestPort:9000 Seq:811793775 Ack:276985316 Flags:32784 WindowSize:229 Checksum:14176 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:04 connection established
2021/11/06 01:02:04 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 129 245 16 128 239 68 48 98 253 111 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:04 checksumer: &{sum:497709 oddByte:33 length:39}
2021/11/06 01:02:04 ret:  497742
2021/11/06 01:02:04 ret:  38997
2021/11/06 01:02:04 ret:  38997
2021/11/06 01:02:04 boom packet injected
2021/11/06 01:02:04 tcp packet: &{SrcPort:33269 DestPort:9000 Seq:811793775 Ack:276985316 Flags:32785 WindowSize:229 Checksum:14175 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:05 tcp packet: &{SrcPort:34725 DestPort:9000 Seq:4213672472 Ack:2205473812 Flags:32784 WindowSize:229 Checksum:61824 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:06 tcp packet: &{SrcPort:43604 DestPort:9000 Seq:3298886644 Ack:0 Flags:40962 WindowSize:29200 Checksum:46829 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:06 tcp packet: &{SrcPort:43604 DestPort:9000 Seq:3298886645 Ack:3164742974 Flags:32784 WindowSize:229 Checksum:2336 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:06 connection established
2021/11/06 01:02:06 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 170 84 188 160 158 158 196 161 3 245 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:06 checksumer: &{sum:538187 oddByte:33 length:39}
2021/11/06 01:02:06 ret:  538220
2021/11/06 01:02:06 ret:  13940
2021/11/06 01:02:06 ret:  13940
2021/11/06 01:02:06 boom packet injected
2021/11/06 01:02:06 tcp packet: &{SrcPort:43604 DestPort:9000 Seq:3298886645 Ack:3164742974 Flags:32785 WindowSize:229 Checksum:2335 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:07 tcp packet: &{SrcPort:42779 DestPort:9000 Seq:1027380507 Ack:168736997 Flags:32784 WindowSize:229 Checksum:4583 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:08 tcp packet: &{SrcPort:43142 DestPort:9000 Seq:2252202447 Ack:0 Flags:40962 WindowSize:29200 Checksum:4467 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:08 tcp packet: &{SrcPort:43142 DestPort:9000 Seq:2252202448 Ack:367257896 Flags:32784 WindowSize:229 Checksum:16042 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:08 connection established
2021/11/06 01:02:08 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 168 134 21 226 98 136 134 61 225 208 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:08 checksumer: &{sum:527110 oddByte:33 length:39}
2021/11/06 01:02:08 ret:  527143
2021/11/06 01:02:08 ret:  2863
2021/11/06 01:02:08 ret:  2863
2021/11/06 01:02:08 boom packet injected
2021/11/06 01:02:08 tcp packet: &{SrcPort:43142 DestPort:9000 Seq:2252202448 Ack:367257896 Flags:32785 WindowSize:229 Checksum:16041 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:09 tcp packet: &{SrcPort:46427 DestPort:9000 Seq:1933495297 Ack:3693386753 Flags:32784 WindowSize:229 Checksum:53739 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:10 tcp packet: &{SrcPort:41109 DestPort:9000 Seq:3294101566 Ack:0 Flags:40962 WindowSize:29200 Checksum:46345 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:10 tcp packet: &{SrcPort:41109 DestPort:9000 Seq:3294101567 Ack:3690083030 Flags:32784 WindowSize:229 Checksum:51891 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:10 connection established
2021/11/06 01:02:10 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 160 149 219 240 172 54 196 88 0 63 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:10 checksumer: &{sum:483435 oddByte:33 length:39}
2021/11/06 01:02:10 ret:  483468
2021/11/06 01:02:10 ret:  24723
2021/11/06 01:02:10 ret:  24723
2021/11/06 01:02:10 boom packet injected
2021/11/06 01:02:10 tcp packet: &{SrcPort:41109 DestPort:9000 Seq:3294101567 Ack:3690083030 Flags:32785 WindowSize:229 Checksum:51890 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:12 tcp packet: &{SrcPort:42421 DestPort:9000 Seq:3790530100 Ack:3193434402 Flags:32784 WindowSize:229 Checksum:4537 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:12 tcp packet: &{SrcPort:34562 DestPort:9000 Seq:3996090048 Ack:0 Flags:40962 WindowSize:29200 Checksum:7795 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:12 tcp packet: &{SrcPort:34562 DestPort:9000 Seq:3996090049 Ack:3472303613 Flags:32784 WindowSize:229 Checksum:17952 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:12 connection established
2021/11/06 01:02:12 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 135 2 206 245 159 93 238 47 126 193 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:12 checksumer: &{sum:479968 oddByte:33 length:39}
2021/11/06 01:02:12 ret:  480001
2021/11/06 01:02:12 ret:  21256
2021/11/06 01:02:12 ret:  21256
2021/11/06 01:02:12 boom packet injected
2021/11/06 01:02:12 tcp packet: &{SrcPort:34562 DestPort:9000 Seq:3996090049 Ack:3472303613 Flags:32785 WindowSize:229 Checksum:17951 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:14 tcp packet: &{SrcPort:33269 DestPort:9000 Seq:811793776 Ack:276985317 Flags:32784 WindowSize:229 Checksum:59707 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:14 tcp packet: &{SrcPort:40580 DestPort:9000 Seq:1516463793 Ack:0 Flags:40962 WindowSize:29200 Checksum:43771 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:14 tcp packet: &{SrcPort:40580 DestPort:9000 Seq:1516463794 Ack:487957692 Flags:32784 WindowSize:229 Checksum:65018 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:14 connection established
2021/11/06 01:02:14 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 158 132 29 20 30 28 90 99 102 178 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:14 checksumer: &{sum:448025 oddByte:33 length:39}
2021/11/06 01:02:14 ret:  448058
2021/11/06 01:02:14 ret:  54848
2021/11/06 01:02:14 ret:  54848
2021/11/06 01:02:14 boom packet injected
2021/11/06 01:02:14 tcp packet: &{SrcPort:40580 DestPort:9000 Seq:1516463794 Ack:487957692 Flags:32785 WindowSize:229 Checksum:65017 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:16 tcp packet: &{SrcPort:43604 DestPort:9000 Seq:3298886646 Ack:3164742975 Flags:32784 WindowSize:229 Checksum:47869 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:16 tcp packet: &{SrcPort:35320 DestPort:9000 Seq:1854820979 Ack:0 Flags:40962 WindowSize:29200 Checksum:47050 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:16 tcp packet: &{SrcPort:35320 DestPort:9000 Seq:1854820980 Ack:4065048458 Flags:32784 WindowSize:229 Checksum:6901 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:16 connection established
2021/11/06 01:02:16 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 137 248 242 74 48 234 110 142 82 116 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:16 checksumer: &{sum:539627 oddByte:33 length:39}
2021/11/06 01:02:16 ret:  539660
2021/11/06 01:02:16 ret:  15380
2021/11/06 01:02:16 ret:  15380
2021/11/06 01:02:16 boom packet injected
2021/11/06 01:02:16 tcp packet: &{SrcPort:35320 DestPort:9000 Seq:1854820980 Ack:4065048458 Flags:32785 WindowSize:229 Checksum:6900 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:18 tcp packet: &{SrcPort:43142 DestPort:9000 Seq:2252202449 Ack:367257897 Flags:32784 WindowSize:229 Checksum:61574 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:18 tcp packet: &{SrcPort:41749 DestPort:9000 Seq:2830151128 Ack:0 Flags:40962 WindowSize:29200 Checksum:341 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:18 tcp packet: &{SrcPort:41749 DestPort:9000 Seq:2830151129 Ack:746133771 Flags:32784 WindowSize:229 Checksum:49408 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:18 connection established
2021/11/06 01:02:18 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 163 21 44 119 146 107 168 176 173 217 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:18 checksumer: &{sum:495158 oddByte:33 length:39}
2021/11/06 01:02:18 ret:  495191
2021/11/06 01:02:18 ret:  36446
2021/11/06 01:02:18 ret:  36446
2021/11/06 01:02:18 boom packet injected
2021/11/06 01:02:18 tcp packet: &{SrcPort:41749 DestPort:9000 Seq:2830151129 Ack:746133771 Flags:32785 WindowSize:229 Checksum:49407 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:20 tcp packet: &{SrcPort:41109 DestPort:9000 Seq:3294101568 Ack:3690083031 Flags:32784 WindowSize:229 Checksum:31888 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:20 tcp packet: &{SrcPort:45781 DestPort:9000 Seq:3563797393 Ack:0 Flags:40962 WindowSize:29200 Checksum:12368 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:20 tcp packet: &{SrcPort:45781 DestPort:9000 Seq:3563797394 Ack:4237219796 Flags:32784 WindowSize:229 Checksum:22860 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:20 connection established
2021/11/06 01:02:20 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 178 213 252 141 81 52 212 107 59 146 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:20 checksumer: &{sum:500110 oddByte:33 length:39}
2021/11/06 01:02:20 ret:  500143
2021/11/06 01:02:20 ret:  41398
2021/11/06 01:02:20 ret:  41398
2021/11/06 01:02:20 boom packet injected
2021/11/06 01:02:20 tcp packet: &{SrcPort:45781 DestPort:9000 Seq:3563797394 Ack:4237219796 Flags:32785 WindowSize:229 Checksum:22859 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:22 tcp packet: &{SrcPort:34562 DestPort:9000 Seq:3996090050 Ack:3472303614 Flags:32784 WindowSize:229 Checksum:63484 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:22 tcp packet: &{SrcPort:43907 DestPort:9000 Seq:3060312362 Ack:0 Flags:40962 WindowSize:29200 Checksum:57403 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:22 tcp packet: &{SrcPort:43907 DestPort:9000 Seq:3060312363 Ack:2440195900 Flags:32784 WindowSize:229 Checksum:54555 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:22 connection established
2021/11/06 01:02:22 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 171 131 145 112 232 156 182 104 169 43 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:22 checksumer: &{sum:471299 oddByte:33 length:39}
2021/11/06 01:02:22 ret:  471332
2021/11/06 01:02:22 ret:  12587
2021/11/06 01:02:22 ret:  12587
2021/11/06 01:02:22 boom packet injected
2021/11/06 01:02:22 tcp packet: &{SrcPort:43907 DestPort:9000 Seq:3060312363 Ack:2440195900 Flags:32785 WindowSize:229 Checksum:54554 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:24 tcp packet: &{SrcPort:40580 DestPort:9000 Seq:1516463795 Ack:487957693 Flags:32784 WindowSize:229 Checksum:45015 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:24 tcp packet: &{SrcPort:44119 DestPort:9000 Seq:3768705231 Ack:0 Flags:40962 WindowSize:29200 Checksum:30136 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:24 tcp packet: &{SrcPort:44119 DestPort:9000 Seq:3768705232 Ack:95763186 Flags:32784 WindowSize:229 Checksum:8912 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:24 connection established
2021/11/06 01:02:24 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 172 87 5 179 180 82 224 161 224 208 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:24 checksumer: &{sum:514981 oddByte:33 length:39}
2021/11/06 01:02:24 ret:  515014
2021/11/06 01:02:24 ret:  56269
2021/11/06 01:02:24 ret:  56269
2021/11/06 01:02:24 boom packet injected
2021/11/06 01:02:24 tcp packet: &{SrcPort:44119 DestPort:9000 Seq:3768705232 Ack:95763186 Flags:32785 WindowSize:229 Checksum:8911 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:26 tcp packet: &{SrcPort:35320 DestPort:9000 Seq:1854820981 Ack:4065048459 Flags:32784 WindowSize:229 Checksum:52433 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:26 tcp packet: &{SrcPort:40711 DestPort:9000 Seq:1678983168 Ack:0 Flags:40962 WindowSize:29200 Checksum:39062 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:26 tcp packet: &{SrcPort:40711 DestPort:9000 Seq:1678983169 Ack:623074764 Flags:32784 WindowSize:229 Checksum:64403 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:26 connection established
2021/11/06 01:02:26 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 159 7 37 33 215 44 100 19 64 1 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:26 checksumer: &{sum:357823 oddByte:33 length:39}
2021/11/06 01:02:26 ret:  357856
2021/11/06 01:02:26 ret:  30181
2021/11/06 01:02:26 ret:  30181
2021/11/06 01:02:26 boom packet injected
2021/11/06 01:02:26 tcp packet: &{SrcPort:40711 DestPort:9000 Seq:1678983169 Ack:623074764 Flags:32785 WindowSize:229 Checksum:64402 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:28 tcp packet: &{SrcPort:41749 DestPort:9000 Seq:2830151130 Ack:746133772 Flags:32784 WindowSize:229 Checksum:29405 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:28 tcp packet: &{SrcPort:40132 DestPort:9000 Seq:3171380660 Ack:0 Flags:40962 WindowSize:29200 Checksum:3168 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:28 tcp packet: &{SrcPort:40132 DestPort:9000 Seq:3171380661 Ack:342990345 Flags:32784 WindowSize:229 Checksum:14339 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:28 connection established
2021/11/06 01:02:28 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 156 196 20 112 23 105 189 7 109 181 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:28 checksumer: &{sum:484977 oddByte:33 length:39}
2021/11/06 01:02:28 ret:  485010
2021/11/06 01:02:28 ret:  26265
2021/11/06 01:02:28 ret:  26265
2021/11/06 01:02:28 boom packet injected
2021/11/06 01:02:28 tcp packet: &{SrcPort:40132 DestPort:9000 Seq:3171380661 Ack:342990345 Flags:32785 WindowSize:229 Checksum:14338 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:30 tcp packet: &{SrcPort:45781 DestPort:9000 Seq:3563797395 Ack:4237219797 Flags:32784 WindowSize:229 Checksum:2857 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:30 tcp packet: &{SrcPort:40388 DestPort:9000 Seq:2642973863 Ack:0 Flags:40962 WindowSize:29200 Checksum:64539 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:30 tcp packet: &{SrcPort:40388 DestPort:9000 Seq:2642973864 Ack:1995609860 Flags:32784 WindowSize:229 Checksum:50289 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:30 connection established
2021/11/06 01:02:30 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 157 196 118 241 16 100 157 136 148 168 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:30 checksumer: &{sum:546516 oddByte:33 length:39}
2021/11/06 01:02:30 ret:  546549
2021/11/06 01:02:30 ret:  22269
2021/11/06 01:02:30 ret:  22269
2021/11/06 01:02:30 boom packet injected
2021/11/06 01:02:30 tcp packet: &{SrcPort:40388 DestPort:9000 Seq:2642973864 Ack:1995609860 Flags:32785 WindowSize:229 Checksum:50288 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:32 tcp packet: &{SrcPort:43907 DestPort:9000 Seq:3060312364 Ack:2440195901 Flags:32784 WindowSize:229 Checksum:34552 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:32 tcp packet: &{SrcPort:38669 DestPort:9000 Seq:3900295753 Ack:0 Flags:40962 WindowSize:29200 Checksum:31330 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:32 tcp packet: &{SrcPort:38669 DestPort:9000 Seq:3900295754 Ack:1026282604 Flags:32784 WindowSize:229 Checksum:14139 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:32 connection established
2021/11/06 01:02:32 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 151 13 61 42 77 204 232 121 202 74 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:32 checksumer: &{sum:447571 oddByte:33 length:39}
2021/11/06 01:02:32 ret:  447604
2021/11/06 01:02:32 ret:  54394
2021/11/06 01:02:32 ret:  54394
2021/11/06 01:02:32 boom packet injected
2021/11/06 01:02:32 tcp packet: &{SrcPort:38669 DestPort:9000 Seq:3900295754 Ack:1026282604 Flags:32785 WindowSize:229 Checksum:14138 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:34 tcp packet: &{SrcPort:44119 DestPort:9000 Seq:3768705233 Ack:95763187 Flags:32784 WindowSize:229 Checksum:54444 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:34 tcp packet: &{SrcPort:45693 DestPort:9000 Seq:3232820122 Ack:0 Flags:40962 WindowSize:29200 Checksum:23961 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:34 tcp packet: &{SrcPort:45693 DestPort:9000 Seq:3232820123 Ack:4016722398 Flags:32784 WindowSize:229 Checksum:58094 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:34 connection established
2021/11/06 01:02:34 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 178 125 239 104 203 62 192 176 235 155 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:34 checksumer: &{sum:490903 oddByte:33 length:39}
2021/11/06 01:02:34 ret:  490936
2021/11/06 01:02:34 ret:  32191
2021/11/06 01:02:34 ret:  32191
2021/11/06 01:02:34 boom packet injected
2021/11/06 01:02:34 tcp packet: &{SrcPort:45693 DestPort:9000 Seq:3232820123 Ack:4016722398 Flags:32785 WindowSize:229 Checksum:58093 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:36 tcp packet: &{SrcPort:40711 DestPort:9000 Seq:1678983170 Ack:623074765 Flags:32784 WindowSize:229 Checksum:44400 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:36 tcp packet: &{SrcPort:34068 DestPort:9000 Seq:748064106 Ack:0 Flags:40962 WindowSize:29200 Checksum:30076 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:36 tcp packet: &{SrcPort:34068 DestPort:9000 Seq:748064107 Ack:272976956 Flags:32784 WindowSize:229 Checksum:55240 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:36 connection established
2021/11/06 01:02:36 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 133 20 16 67 197 156 44 150 141 107 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:36 checksumer: &{sum:459155 oddByte:33 length:39}
2021/11/06 01:02:36 ret:  459188
2021/11/06 01:02:36 ret:  443
2021/11/06 01:02:36 ret:  443
2021/11/06 01:02:36 boom packet injected
2021/11/06 01:02:36 tcp packet: &{SrcPort:34068 DestPort:9000 Seq:748064107 Ack:272976956 Flags:32785 WindowSize:229 Checksum:55239 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:38 tcp packet: &{SrcPort:40132 DestPort:9000 Seq:3171380662 Ack:342990346 Flags:32784 WindowSize:229 Checksum:59871 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:38 tcp packet: &{SrcPort:45612 DestPort:9000 Seq:3222762702 Ack:0 Flags:40962 WindowSize:29200 Checksum:50605 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:38 tcp packet: &{SrcPort:45612 DestPort:9000 Seq:3222762703 Ack:3218706687 Flags:32784 WindowSize:229 Checksum:11218 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:38 connection established
2021/11/06 01:02:38 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 178 44 191 216 10 95 192 23 116 207 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:38 checksumer: &{sum:481071 oddByte:33 length:39}
2021/11/06 01:02:38 ret:  481104
2021/11/06 01:02:38 ret:  22359
2021/11/06 01:02:38 ret:  22359
2021/11/06 01:02:38 boom packet injected
2021/11/06 01:02:38 tcp packet: &{SrcPort:45612 DestPort:9000 Seq:3222762703 Ack:3218706687 Flags:32785 WindowSize:229 Checksum:11217 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:40 tcp packet: &{SrcPort:40388 DestPort:9000 Seq:2642973865 Ack:1995609861 Flags:32784 WindowSize:229 Checksum:30286 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:40 tcp packet: &{SrcPort:34582 DestPort:9000 Seq:2434192377 Ack:0 Flags:40962 WindowSize:29200 Checksum:47305 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:40 tcp packet: &{SrcPort:34582 DestPort:9000 Seq:2434192378 Ack:1391047324 Flags:32784 WindowSize:229 Checksum:25199 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:40 connection established
2021/11/06 01:02:40 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 135 22 82 232 43 252 145 22 211 250 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:40 checksumer: &{sum:530408 oddByte:33 length:39}
2021/11/06 01:02:40 ret:  530441
2021/11/06 01:02:40 ret:  6161
2021/11/06 01:02:40 ret:  6161
2021/11/06 01:02:40 boom packet injected
2021/11/06 01:02:40 tcp packet: &{SrcPort:34582 DestPort:9000 Seq:2434192378 Ack:1391047324 Flags:32785 WindowSize:229 Checksum:25198 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:42 tcp packet: &{SrcPort:38669 DestPort:9000 Seq:3900295755 Ack:1026282605 Flags:32784 WindowSize:229 Checksum:59670 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:42 tcp packet: &{SrcPort:34515 DestPort:9000 Seq:1056692869 Ack:0 Flags:40962 WindowSize:29200 Checksum:64714 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:42 tcp packet: &{SrcPort:34515 DestPort:9000 Seq:1056692870 Ack:1190421398 Flags:32784 WindowSize:229 Checksum:63898 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:42 connection established
2021/11/06 01:02:42 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 134 211 70 242 220 246 62 251 218 134 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:42 checksumer: &{sum:608832 oddByte:33 length:39}
2021/11/06 01:02:42 ret:  608865
2021/11/06 01:02:42 ret:  19050
2021/11/06 01:02:42 ret:  19050
2021/11/06 01:02:42 boom packet injected
2021/11/06 01:02:42 tcp packet: &{SrcPort:34515 DestPort:9000 Seq:1056692870 Ack:1190421398 Flags:32785 WindowSize:229 Checksum:63897 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:44 tcp packet: &{SrcPort:45693 DestPort:9000 Seq:3232820124 Ack:4016722399 Flags:32784 WindowSize:229 Checksum:38091 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:44 tcp packet: &{SrcPort:40192 DestPort:9000 Seq:3832894263 Ack:0 Flags:40962 WindowSize:29200 Checksum:49313 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:44 tcp packet: &{SrcPort:40192 DestPort:9000 Seq:3832894264 Ack:1494894039 Flags:32784 WindowSize:229 Checksum:49467 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:44 connection established
2021/11/06 01:02:44 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 157 0 89 24 191 55 228 117 83 56 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:44 checksumer: &{sum:395884 oddByte:33 length:39}
2021/11/06 01:02:44 ret:  395917
2021/11/06 01:02:44 ret:  2707
2021/11/06 01:02:44 ret:  2707
2021/11/06 01:02:44 boom packet injected
2021/11/06 01:02:44 tcp packet: &{SrcPort:40192 DestPort:9000 Seq:3832894264 Ack:1494894039 Flags:32785 WindowSize:229 Checksum:49466 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:46 tcp packet: &{SrcPort:34068 DestPort:9000 Seq:748064108 Ack:272976957 Flags:32784 WindowSize:229 Checksum:35237 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:46 tcp packet: &{SrcPort:41792 DestPort:9000 Seq:1197555987 Ack:0 Flags:40962 WindowSize:29200 Checksum:25033 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:46 tcp packet: &{SrcPort:41792 DestPort:9000 Seq:1197555988 Ack:61735424 Flags:32784 WindowSize:229 Checksum:62422 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:46 connection established
2021/11/06 01:02:46 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 163 64 3 172 123 96 71 97 65 20 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:46 checksumer: &{sum:445993 oddByte:33 length:39}
2021/11/06 01:02:46 ret:  446026
2021/11/06 01:02:46 ret:  52816
2021/11/06 01:02:46 ret:  52816
2021/11/06 01:02:46 boom packet injected
2021/11/06 01:02:46 tcp packet: &{SrcPort:41792 DestPort:9000 Seq:1197555988 Ack:61735424 Flags:32785 WindowSize:229 Checksum:62421 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:48 tcp packet: &{SrcPort:38219 DestPort:9000 Seq:319790595 Ack:0 Flags:40962 WindowSize:29200 Checksum:16207 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:48 tcp packet: &{SrcPort:38219 DestPort:9000 Seq:319790596 Ack:3310558843 Flags:32784 WindowSize:229 Checksum:60265 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:48 connection established
2021/11/06 01:02:48 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 149 75 197 81 151 219 19 15 158 4 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:48 checksumer: &{sum:432162 oddByte:33 length:39}
2021/11/06 01:02:48 ret:  432195
2021/11/06 01:02:48 ret:  38985
2021/11/06 01:02:48 ret:  38985
2021/11/06 01:02:48 boom packet injected
2021/11/06 01:02:48 tcp packet: &{SrcPort:38219 DestPort:9000 Seq:319790596 Ack:3310558843 Flags:32785 WindowSize:229 Checksum:60263 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:48 tcp packet: &{SrcPort:45612 DestPort:9000 Seq:3222762704 Ack:3218706688 Flags:32784 WindowSize:229 Checksum:56750 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:50 tcp packet: &{SrcPort:34582 DestPort:9000 Seq:2434192379 Ack:1391047325 Flags:32784 WindowSize:229 Checksum:5196 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:50 tcp packet: &{SrcPort:46670 DestPort:9000 Seq:713488503 Ack:0 Flags:40962 WindowSize:29200 Checksum:42127 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:50 tcp packet: &{SrcPort:46670 DestPort:9000 Seq:713488504 Ack:4206003364 Flags:32784 WindowSize:229 Checksum:44369 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:50 connection established
2021/11/06 01:02:50 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 182 78 250 176 254 4 42 134 248 120 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:50 checksumer: &{sum:462672 oddByte:33 length:39}
2021/11/06 01:02:50 ret:  462705
2021/11/06 01:02:50 ret:  3960
2021/11/06 01:02:50 ret:  3960
2021/11/06 01:02:50 boom packet injected
2021/11/06 01:02:50 tcp packet: &{SrcPort:46670 DestPort:9000 Seq:713488504 Ack:4206003364 Flags:32785 WindowSize:229 Checksum:44368 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:52 tcp packet: &{SrcPort:34515 DestPort:9000 Seq:1056692871 Ack:1190421399 Flags:32784 WindowSize:229 Checksum:43896 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:52 tcp packet: &{SrcPort:45420 DestPort:9000 Seq:489856275 Ack:0 Flags:40962 WindowSize:29200 Checksum:2649 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:52 tcp packet: &{SrcPort:45420 DestPort:9000 Seq:489856276 Ack:1731434633 Flags:32784 WindowSize:229 Checksum:35556 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:52 connection established
2021/11/06 01:02:52 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 177 108 103 50 17 233 29 50 157 20 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:52 checksumer: &{sum:449123 oddByte:33 length:39}
2021/11/06 01:02:52 ret:  449156
2021/11/06 01:02:52 ret:  55946
2021/11/06 01:02:52 ret:  55946
2021/11/06 01:02:52 boom packet injected
2021/11/06 01:02:52 tcp packet: &{SrcPort:45420 DestPort:9000 Seq:489856276 Ack:1731434633 Flags:32785 WindowSize:229 Checksum:35555 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:54 tcp packet: &{SrcPort:40192 DestPort:9000 Seq:3832894265 Ack:1494894040 Flags:32784 WindowSize:229 Checksum:29464 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:54 tcp packet: &{SrcPort:43516 DestPort:9000 Seq:1627670137 Ack:0 Flags:40962 WindowSize:29200 Checksum:7361 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:54 tcp packet: &{SrcPort:43516 DestPort:9000 Seq:1627670138 Ack:260593921 Flags:32784 WindowSize:229 Checksum:11440 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:54 connection established
2021/11/06 01:02:54 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 169 252 15 134 210 97 97 4 70 122 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:54 checksumer: &{sum:487089 oddByte:33 length:39}
2021/11/06 01:02:54 ret:  487122
2021/11/06 01:02:54 ret:  28377
2021/11/06 01:02:54 ret:  28377
2021/11/06 01:02:54 boom packet injected
2021/11/06 01:02:54 tcp packet: &{SrcPort:43516 DestPort:9000 Seq:1627670138 Ack:260593921 Flags:32785 WindowSize:229 Checksum:11439 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:56 tcp packet: &{SrcPort:41792 DestPort:9000 Seq:1197555989 Ack:61735425 Flags:32784 WindowSize:229 Checksum:42418 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:56 tcp packet: &{SrcPort:44693 DestPort:9000 Seq:2150838240 Ack:0 Flags:40962 WindowSize:29200 Checksum:1986 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:56 tcp packet: &{SrcPort:44693 DestPort:9000 Seq:2150838241 Ack:234791593 Flags:32784 WindowSize:229 Checksum:51138 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:56 connection established
2021/11/06 01:02:56 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 174 149 13 253 28 9 128 51 47 225 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:56 checksumer: &{sum:506886 oddByte:33 length:39}
2021/11/06 01:02:56 ret:  506919
2021/11/06 01:02:56 ret:  48174
2021/11/06 01:02:56 ret:  48174
2021/11/06 01:02:56 boom packet injected
2021/11/06 01:02:56 tcp packet: &{SrcPort:44693 DestPort:9000 Seq:2150838241 Ack:234791593 Flags:32785 WindowSize:229 Checksum:51137 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:58 tcp packet: &{SrcPort:38219 DestPort:9000 Seq:319790597 Ack:3310558844 Flags:32784 WindowSize:229 Checksum:40262 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:58 tcp packet: &{SrcPort:44709 DestPort:9000 Seq:661396249 Ack:0 Flags:40962 WindowSize:29200 Checksum:28016 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.200
2021/11/06 01:02:58 tcp packet: &{SrcPort:44709 DestPort:9000 Seq:661396250 Ack:3813920086 Flags:32784 WindowSize:229 Checksum:9628 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.200
2021/11/06 01:02:58 connection established
2021/11/06 01:02:58 calling checksumTCP: 10.244.4.23 10.244.3.200 [35 40 174 165 227 82 70 182 39 108 27 26 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/06 01:02:58 checksumer: &{sum:475289 oddByte:33 length:39}
2021/11/06 01:02:58 ret:  475322
2021/11/06 01:02:58 ret:  16577
2021/11/06 01:02:58 ret:  16577
2021/11/06 01:02:58 boom packet injected
2021/11/06 01:02:58 tcp packet: &{SrcPort:44709 DestPort:9000 Seq:661396250 Ack:3813920086 Flags:32785 WindowSize:229 Checksum:9627 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.200

Nov  6 01:02:58.765: INFO: boom-server OK: did not receive any RST packet
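(Editor's aside, not part of the captured log.) This verdict is the point of the test: the out-of-window "boom" packets injected above should be classified INVALID by conntrack and dropped before they reach the client, so the client never answers with a RST, and no "tcp packet:" line in the server log ever shows an RST flag. A minimal Go sketch of that final check, assuming, hypothetically, that it reduces to scanning the captured boom-server log for an RST flag:

    package main

    import (
            "fmt"
            "strings"
    )

    // sawRST reports whether any logged "tcp packet:" line carries an RST flag.
    func sawRST(logs string) bool {
            for _, line := range strings.Split(logs, "\n") {
                    if strings.Contains(line, "tcp packet:") && strings.Contains(line, "RST") {
                            return true
                    }
            }
            return false
    }

    func main() {
            sample := "2021/11/06 01:01:55 tcp packet: &{...}, flag: FIN ACK , data: []"
            fmt.Println(sawRST(sample)) // false, matching "did not receive any RST packet"
    }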
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:02:58.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-6070" for this suite.


• [SLOW TEST:80.445 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":2,"skipped":468,"failed":0}
Nov  6 01:02:58.777: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:48.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-9374
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov  6 01:01:48.986: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov  6 01:01:49.015: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:51.019: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:53.018: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:55.019: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:57.018: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:01:59.020: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:01.019: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:03.018: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:05.018: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:07.020: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov  6 01:02:09.021: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov  6 01:02:09.025: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov  6 01:02:11.029: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov  6 01:02:19.065: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov  6 01:02:19.065: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Nov  6 01:02:19.087: INFO: Service node-port-service in namespace nettest-9374 found.
Nov  6 01:02:19.101: INFO: Service session-affinity-service in namespace nettest-9374 found.
STEP: Waiting for NodePort service to expose endpoint
Nov  6 01:02:20.104: INFO: Waiting for the number of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Nov  6 01:02:21.107: INFO: Waiting for the number of service:session-affinity-service endpoints to be 2
STEP: dialing(http) 10.10.190.207 (node) --> 10.233.13.202:80 (config.clusterIP)
Nov  6 01:02:21.111: INFO: Going to poll 10.233.13.202 on port 80 at least 0 times, with a maximum of 34 tries before failing
Nov  6 01:02:21.113: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.13.202:80/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:21.113: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:21.227: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0])
Nov  6 01:02:23.231: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.13.202:80/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:23.231: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:23.332: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0])
Nov  6 01:02:25.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.233.13.202:80/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:25.338: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:25.489: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
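(Editor's aside, not part of the captured log.) Each poll above execs curl against the service clusterIP's /hostName endpoint from host-test-container-pod and records which backend answered; the step only completes once every expected netserver has been seen at least once (first only netserver-0, then both). A minimal Go sketch of that accumulate-until-complete loop, using hypothetical names rather than the framework's own helpers:

    package main

    import "fmt"

    // foundAll reports whether every expected endpoint has answered at least once.
    func foundAll(expected []string, seen map[string]bool) bool {
            for _, e := range expected {
                    if !seen[e] {
                            return false
                    }
            }
            return true
    }

    func main() {
            expected := []string{"netserver-0", "netserver-1"}
            seen := map[string]bool{}
            // Each reply stands in for one hostname returned by a successful curl.
            for _, reply := range []string{"netserver-0", "netserver-0", "netserver-1"} {
                    seen[reply] = true
                    if foundAll(expected, seen) {
                            fmt.Println("Found all 2 expected endpoints:", expected)
                            break
                    }
            }
    }

The nodeIP:31965 dial that follows uses the same loop; there, each "command terminated with exit code 1" just means the curl|grep pipeline produced no non-empty output, and it counts as one more of the 34 allowed tries.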
STEP: dialing(http) 10.10.190.207 (node) --> 10.10.190.207:31965 (nodeIP)
Nov  6 01:02:25.489: INFO: Going to poll 10.10.190.207 on port 31965 at least 0 times, with a maximum of 34 tries before failing
Nov  6 01:02:25.492: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:25.492: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:25.587: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:25.587: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:27.590: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:27.590: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:27.913: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:27.913: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:29.918: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:29.918: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:30.001: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:30.001: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:32.005: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:32.005: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:32.258: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:32.258: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:34.264: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:34.264: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:34.350: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:34.350: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:36.354: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:36.354: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:36.441: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:36.441: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:38.448: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:38.448: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:38.582: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:38.582: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:40.588: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:40.588: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:40.725: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:40.725: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:42.729: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:42.729: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:42.817: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:42.817: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:44.821: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:44.821: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:44.990: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:44.990: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:46.996: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:46.996: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:47.079: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:47.079: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:49.084: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:49.084: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:49.178: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:49.178: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:51.182: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:51.182: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:51.280: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:51.280: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:53.283: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:53.284: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:53.383: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:53.383: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:55.386: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:55.386: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:55.829: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:55.829: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:57.833: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:57.833: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:02:57.965: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:02:57.965: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:02:59.968: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:02:59.968: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:00.056: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:00.056: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:02.059: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:02.059: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:02.145: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:02.145: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:04.150: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:04.150: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:04.233: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:04.233: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:06.238: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:06.238: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:06.322: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:06.322: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:08.327: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:08.327: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:08.593: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:08.593: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:10.599: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:10.599: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:10.704: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:10.704: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:12.707: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:12.707: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:12.808: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:12.809: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:14.812: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:14.812: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:14.905: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:14.905: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:16.911: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:16.911: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:16.999: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:16.999: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:19.003: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:19.003: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:19.100: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:19.100: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:21.104: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:21.104: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:21.198: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:21.198: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:23.202: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:23.202: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:23.292: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:23.292: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:25.297: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:25.297: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:25.392: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:25.392: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:27.397: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:27.397: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:27.581: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:27.581: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:29.587: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:29.587: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:29.686: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:29.686: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:31.689: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:31.689: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:31.871: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:31.871: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:33.875: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:33.875: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:34.304: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:34.304: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Nov  6 01:03:36.307: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\s*$'] Namespace:nettest-9374 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov  6 01:03:36.307: INFO: >>> kubeConfig: /root/.kube/config
Nov  6 01:03:36.389: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Nov  6 01:03:36.389: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
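The check that keeps failing above can be reproduced by hand: it is the same curl the suite runs from inside the host-network test pod against node1's NodePort. A minimal sketch, reusing the namespace, pod, container, node IP and port exactly as they appear in this log (the trailing grep in the suite's command is what turns curl's empty output into exit code 1):

  # Re-run the NodePort probe the suite is retrying, from inside host-test-container-pod.
  kubectl --kubeconfig=/root/.kube/config -n nettest-9374 \
    exec host-test-container-pod -c agnhost-container -- \
    /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName"

  # A healthy NodePort answers with the name of one backend pod
  # (netserver-0 or netserver-1); an empty body means the request never
  # reached a backend.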
Nov  6 01:03:38.391: INFO: 
Output of kubectl describe pod nettest-9374/netserver-0:

Nov  6 01:03:38.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-9374 describe pod netserver-0 --namespace=nettest-9374'
Nov  6 01:03:38.584: INFO: stderr: ""
Nov  6 01:03:38.584: INFO: stdout:
Nov  6 01:03:38.584: INFO: Name:         netserver-0
Namespace:    nettest-9374
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Sat, 06 Nov 2021 01:01:49 +0000
Labels:       selector-768471f3-4d79-4697-b4c1-1d50a9ed4292=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.199"
                    ],
                    "mac": "be:ec:dd:18:32:58",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.199"
                    ],
                    "mac": "be:ec:dd:18:32:58",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.3.199
IPs:
  IP:  10.244.3.199
Containers:
  webserver:
    Container ID:  docker://0fe10b1c58336ccb1f096c5f541bb2cc398a50f11539f082d54c74475a601482
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Sat, 06 Nov 2021 01:01:52 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5jqt5 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-5jqt5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node1
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  109s  default-scheduler  Successfully assigned nettest-9374/netserver-0 to node1
  Normal  Pulling    107s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     107s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 290.317364ms
  Normal  Created    106s  kubelet            Created container webserver
  Normal  Started    106s  kubelet            Started container webserver
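The describe output above shows netserver-0 is Running and Ready, serving agnhost netexec on pod IP 10.244.3.199 port 8080 with passing /healthz probes. A quick way to separate backend health from the NodePort path is to hit the pod IP directly from one of the test pods; a sketch, assuming the pods from this run are still present:

  # Bypass the NodePort and query netserver-0 on its pod IP (values from
  # the describe above); a working backend returns its own pod name.
  kubectl --kubeconfig=/root/.kube/config -n nettest-9374 \
    exec test-container-pod -- \
    curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.199:8080/hostName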

Nov  6 01:03:38.584: INFO: 
Output of kubectl describe pod nettest-9374/netserver-1:

Nov  6 01:03:38.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-9374 describe pod netserver-1 --namespace=nettest-9374'
Nov  6 01:03:38.750: INFO: stderr: ""
Nov  6 01:03:38.751: INFO: stdout:
Nov  6 01:03:38.751: INFO: Name:         netserver-1
Namespace:    nettest-9374
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Sat, 06 Nov 2021 01:01:49 +0000
Labels:       selector-768471f3-4d79-4697-b4c1-1d50a9ed4292=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.29"
                    ],
                    "mac": "66:22:3e:bb:27:7e",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.29"
                    ],
                    "mac": "66:22:3e:bb:27:7e",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.4.29
IPs:
  IP:  10.244.4.29
Containers:
  webserver:
    Container ID:  docker://b3bf6cf322f2d05f53bd9f382d539b5ad3abd5e3ef59e8e237c170a5118710e2
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Sat, 06 Nov 2021 01:01:53 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zk62v (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-zk62v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node2
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  109s  default-scheduler  Successfully assigned nettest-9374/netserver-1 to node2
  Normal  Pulling    106s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     106s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 519.078789ms
  Normal  Created    106s  kubelet            Created container webserver
  Normal  Started    105s  kubelet            Started container webserver

Nov  6 01:03:38.751: FAIL: failed dialing endpoint, failed to find expected endpoints, 
tries 34
Command curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName
retrieved map[]
expected map[netserver-0:{} netserver-1:{}]

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000683980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000683980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000683980, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
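The failure itself is unambiguous: after 34 tries the dial through http://10.10.190.207:31965/hostName retrieved map[] instead of the expected map[netserver-0:{} netserver-1:{}], i.e. no backend ever answered via the NodePort even though both netserver pods were Ready. Before digging into the datapath it is worth confirming that the test's NodePort service lists both pod IPs as endpoints; a sketch of the checks (the service name created by the test is not shown in this excerpt, so the commands list the whole namespace):

  # Confirm the service behind NodePort 31965 exists and has both netserver
  # pod IPs (10.244.3.199 and 10.244.4.29 per the describes above) as endpoints.
  kubectl --kubeconfig=/root/.kube/config -n nettest-9374 get svc -o wide
  kubectl --kubeconfig=/root/.kube/config -n nettest-9374 get endpoints -o wide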
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-9374".
STEP: Found 20 events.
Nov  6 01:03:38.756: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for host-test-container-pod: { } Scheduled: Successfully assigned nettest-9374/host-test-container-pod to node1
Nov  6 01:03:38.756: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned nettest-9374/netserver-0 to node1
Nov  6 01:03:38.756: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned nettest-9374/netserver-1 to node2
Nov  6 01:03:38.756: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned nettest-9374/test-container-pod to node2
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:01:51 +0000 UTC - event for netserver-0: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:01:51 +0000 UTC - event for netserver-0: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 290.317364ms
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:01:52 +0000 UTC - event for netserver-0: {kubelet node1} Started: Started container webserver
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:01:52 +0000 UTC - event for netserver-0: {kubelet node1} Created: Created container webserver
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:01:52 +0000 UTC - event for netserver-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:01:52 +0000 UTC - event for netserver-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 519.078789ms
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:01:52 +0000 UTC - event for netserver-1: {kubelet node2} Created: Created container webserver
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:01:53 +0000 UTC - event for netserver-1: {kubelet node2} Started: Started container webserver
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:02:12 +0000 UTC - event for host-test-container-pod: {kubelet node1} Created: Created container agnhost-container
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:02:12 +0000 UTC - event for host-test-container-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:02:12 +0000 UTC - event for host-test-container-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 284.739892ms
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:02:12 +0000 UTC - event for host-test-container-pod: {kubelet node1} Started: Started container agnhost-container
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:02:14 +0000 UTC - event for test-container-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:02:15 +0000 UTC - event for test-container-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 355.924911ms
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:02:15 +0000 UTC - event for test-container-pod: {kubelet node2} Created: Created container webserver
Nov  6 01:03:38.757: INFO: At 2021-11-06 01:02:15 +0000 UTC - event for test-container-pod: {kubelet node2} Started: Started container webserver
Nov  6 01:03:38.759: INFO: POD                      NODE   PHASE    GRACE  CONDITIONS
Nov  6 01:03:38.759: INFO: host-test-container-pod  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:11 +0000 UTC  }]
Nov  6 01:03:38.759: INFO: netserver-0              node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:49 +0000 UTC  }]
Nov  6 01:03:38.759: INFO: netserver-1              node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:49 +0000 UTC  }]
Nov  6 01:03:38.759: INFO: test-container-pod       node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:02:11 +0000 UTC  }]
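All four pods in nettest-9374 report Ready, which points the suspicion at the node-level NodePort datapath (kube-proxy on node1) or the flannel pod network rather than at the workloads. A sketch of what one might check next, assuming an iptables-mode kube-proxy (the proxy mode is not stated in this log):

  # On node1 (10.10.190.207): are rules programmed for NodePort 31965?
  iptables-save | grep 31965

  # Is kube-proxy healthy on the nodes involved?
  kubectl --kubeconfig=/root/.kube/config -n kube-system get pods -o wide | grep kube-proxy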
Nov  6 01:03:38.759: INFO: 
Nov  6 01:03:38.764: INFO: 
Logging node info for node master1
Nov  6 01:03:38.770: INFO: Node Info: &Node{ObjectMeta:{master1    acabf68f-e6fa-4376-87a7-953399a106b3 82205 0 2021-11-05 20:58:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:37 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:37 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:37 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:37 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:38.771: INFO: 
Logging kubelet events for node master1
Nov  6 01:03:38.781: INFO: 
Logging pods the kubelet thinks are on node master1
Nov  6 01:03:38.796: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.796: INFO: 	Container kube-proxy ready: true, restart count 1
Nov  6 01:03:38.796: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.796: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:38.796: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:38.796: INFO: 	Container docker-registry ready: true, restart count 0
Nov  6 01:03:38.796: INFO: 	Container nginx ready: true, restart count 0
Nov  6 01:03:38.796: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:38.796: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:38.796: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:03:38.796: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.796: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov  6 01:03:38.796: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.796: INFO: 	Container kube-controller-manager ready: true, restart count 3
Nov  6 01:03:38.796: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.796: INFO: 	Container kube-scheduler ready: true, restart count 0
Nov  6 01:03:38.796: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:38.796: INFO: 	Init container install-cni ready: true, restart count 2
Nov  6 01:03:38.796: INFO: 	Container kube-flannel ready: true, restart count 2
Nov  6 01:03:38.796: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.796: INFO: 	Container coredns ready: true, restart count 2
W1106 01:03:38.810791      27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:38.890: INFO: 
Latency metrics for node master1
Nov  6 01:03:38.890: INFO: 
Logging node info for node master2
Nov  6 01:03:38.893: INFO: Node Info: &Node{ObjectMeta:{master2    004d4571-8588-4d18-93d0-ad0af4174866 82173 0 2021-11-05 20:59:23 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:29 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:29 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:29 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:29 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:38.893: INFO: 
Logging kubelet events for node master2
Nov  6 01:03:38.895: INFO: 
Logging pods the kubelet thinks are on node master2
Nov  6 01:03:38.908: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.908: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov  6 01:03:38.908: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.908: INFO: 	Container kube-scheduler ready: true, restart count 3
Nov  6 01:03:38.908: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.908: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:38.908: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.908: INFO: 	Container nfd-controller ready: true, restart count 0
Nov  6 01:03:38.908: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:38.908: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:38.908: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:03:38.908: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.908: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov  6 01:03:38.908: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:38.908: INFO: 	Container kube-proxy ready: true, restart count 1
Nov  6 01:03:38.908: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:38.908: INFO: 	Init container install-cni ready: true, restart count 0
Nov  6 01:03:38.908: INFO: 	Container kube-flannel ready: true, restart count 3
W1106 01:03:38.920639      27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:38.980: INFO: 
Latency metrics for node master2
Nov  6 01:03:38.980: INFO: 
Logging node info for node master3
Nov  6 01:03:38.983: INFO: Node Info: &Node{ObjectMeta:{master3    d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 82196 0 2021-11-05 20:59:34 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running 
on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:34 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:34 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:34 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:34 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:38.984: INFO: 
Logging kubelet events for node master3
Nov  6 01:03:38.986: INFO: 
Logging pods the kubelet thinks are on node master3
Nov  6 01:03:39.000: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.000: INFO: 	Container kube-scheduler ready: true, restart count 3
Nov  6 01:03:39.000: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.000: INFO: 	Container kube-proxy ready: true, restart count 2
Nov  6 01:03:39.000: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.000: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:39.000: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.000: INFO: 	Container autoscaler ready: true, restart count 1
Nov  6 01:03:39.000: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.000: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov  6 01:03:39.000: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.000: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov  6 01:03:39.000: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:39.000: INFO: 	Init container install-cni ready: true, restart count 0
Nov  6 01:03:39.000: INFO: 	Container kube-flannel ready: true, restart count 1
Nov  6 01:03:39.000: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.000: INFO: 	Container coredns ready: true, restart count 1
Nov  6 01:03:39.000: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:39.000: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:39.000: INFO: 	Container node-exporter ready: true, restart count 0
W1106 01:03:39.014105      27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:39.081: INFO: 
Latency metrics for node master3
Nov  6 01:03:39.081: INFO: 
Logging node info for node node1
Nov  6 01:03:39.083: INFO: Node Info: &Node{ObjectMeta:{node1    290b18e7-da33-4da8-b78a-8a7f28c49abf 82203 0 2021-11-05 21:00:39 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 23:53:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:36 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:36 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:36 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:36 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:39.084: INFO: 
Logging kubelet events for node node1
Nov  6 01:03:39.085: INFO: 
Logging pods the kubelet thinks are on node node1
Nov  6 01:03:39.106: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container kube-proxy ready: true, restart count 2
Nov  6 01:03:39.106: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container nodereport ready: true, restart count 0
Nov  6 01:03:39.106: INFO: 	Container reconcile ready: true, restart count 0
Nov  6 01:03:39.106: INFO: prometheus-k8s-0 started at 2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container config-reloader ready: true, restart count 0
Nov  6 01:03:39.106: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov  6 01:03:39.106: INFO: 	Container grafana ready: true, restart count 0
Nov  6 01:03:39.106: INFO: 	Container prometheus ready: true, restart count 1
Nov  6 01:03:39.106: INFO: up-down-2-kmphz started at 2021-11-06 01:02:28 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container up-down-2 ready: true, restart count 0
Nov  6 01:03:39.106: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov  6 01:03:39.106: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Nov  6 01:03:39.106: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov  6 01:03:39.106: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container collectd ready: true, restart count 0
Nov  6 01:03:39.106: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov  6 01:03:39.106: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov  6 01:03:39.106: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container nfd-worker ready: true, restart count 0
Nov  6 01:03:39.106: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov  6 01:03:39.106: INFO: up-down-2-mvxp8 started at 2021-11-06 01:02:28 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Container up-down-2 ready: true, restart count 0
Nov  6 01:03:39.106: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:39.106: INFO: 	Init container install-cni ready: true, restart count 2
Nov  6 01:03:39.106: INFO: 	Container kube-flannel ready: true, restart count 3
Nov  6 01:03:39.106: INFO: nodeport-update-service-n26rr started at 2021-11-06 01:01:15 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.107: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov  6 01:03:39.107: INFO: netserver-0 started at 2021-11-06 01:01:49 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.107: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:03:39.107: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.107: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:39.107: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:03:39.107: INFO: 	Container discover ready: false, restart count 0
Nov  6 01:03:39.107: INFO: 	Container init ready: false, restart count 0
Nov  6 01:03:39.107: INFO: 	Container install ready: false, restart count 0
Nov  6 01:03:39.107: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:39.107: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:39.107: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:03:39.107: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.107: INFO: 	Container tas-extender ready: true, restart count 0
Nov  6 01:03:39.107: INFO: execpodwsgzw started at 2021-11-06 01:01:33 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.107: INFO: 	Container agnhost-container ready: true, restart count 0
Nov  6 01:03:39.107: INFO: host-test-container-pod started at 2021-11-06 01:02:11 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.107: INFO: 	Container agnhost-container ready: true, restart count 0
W1106 01:03:39.120449      27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:39.350: INFO: 
Latency metrics for node node1
Nov  6 01:03:39.351: INFO: 
Logging node info for node node2
Nov  6 01:03:39.353: INFO: Node Info: &Node{ObjectMeta:{node2    7d7e71f0-82d7-49ba-b69a-56600dd59b3f 82198 0 2021-11-05 21:00:39 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 23:54:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-06 00:16:08 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:35 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:39.354: INFO: 
Logging kubelet events for node node2
Nov  6 01:03:39.356: INFO: 
Logging pods the kubelet thinks are on node node2
Nov  6 01:03:39.369: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.369: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov  6 01:03:39.369: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.369: INFO: 	Container nfd-worker ready: true, restart count 0
Nov  6 01:03:39.370: INFO: netserver-1 started at 2021-11-06 01:01:49 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:03:39.370: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container discover ready: false, restart count 0
Nov  6 01:03:39.370: INFO: 	Container init ready: false, restart count 0
Nov  6 01:03:39.370: INFO: 	Container install ready: false, restart count 0
Nov  6 01:03:39.370: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:39.370: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:03:39.370: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Nov  6 01:03:39.370: INFO: up-down-3-pmm2b started at 2021-11-06 01:03:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container up-down-3 ready: true, restart count 0
Nov  6 01:03:39.370: INFO: nodeport-update-service-jqdx5 started at 2021-11-06 01:01:15 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov  6 01:03:39.370: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Init container install-cni ready: true, restart count 1
Nov  6 01:03:39.370: INFO: 	Container kube-flannel ready: true, restart count 2
Nov  6 01:03:39.370: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container kube-proxy ready: true, restart count 2
Nov  6 01:03:39.370: INFO: up-down-3-bfkm8 started at 2021-11-06 01:03:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container up-down-3 ready: true, restart count 0
Nov  6 01:03:39.370: INFO: up-down-2-c8ng6 started at 2021-11-06 01:02:28 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container up-down-2 ready: true, restart count 0
Nov  6 01:03:39.370: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:39.370: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov  6 01:03:39.370: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container collectd ready: true, restart count 0
Nov  6 01:03:39.370: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov  6 01:03:39.370: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov  6 01:03:39.370: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container nodereport ready: true, restart count 0
Nov  6 01:03:39.370: INFO: 	Container reconcile ready: true, restart count 0
Nov  6 01:03:39.370: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:39.370: INFO: 	Container prometheus-operator ready: true, restart count 0
Nov  6 01:03:39.370: INFO: test-container-pod started at 2021-11-06 01:02:11 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:03:39.370: INFO: up-down-3-l8nlv started at 2021-11-06 01:03:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:39.370: INFO: 	Container up-down-3 ready: true, restart count 0
Nov  6 01:03:39.370: INFO: verify-service-up-host-exec-pod started at  (0+0 container statuses recorded)
W1106 01:03:39.391775      27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:39.702: INFO: 
Latency metrics for node node2
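
The per-node dumps above (Node Info, the kubelet's pod list, and latency metrics for master3, node1 and node2) are the diagnostics the e2e framework collects automatically when a spec fails. Roughly the same view can be pulled by hand from a live cluster; a minimal sketch, assuming kubectl access with the same kubeconfig:

    # Node conditions, capacity/allocatable and images (mirrors the "Node Info" dump)
    kubectl describe node node2

    # Pods scheduled to that node (mirrors "Logging pods the kubelet thinks are on node ...")
    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node2
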
Nov  6 01:03:39.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9374" for this suite.


• Failure [110.855 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Nov  6 01:03:38.751: failed dialing endpoint, failed to find expected endpoints, 
    tries 34
    Command curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName
    retrieved map[]
    expected map[netserver-0:{} netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
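
    The check that failed here polls the service's NodePort and records which backend hostnames answer: each hit on /hostName should return the name of one of the agnhost netserver pods, and after 34 tries the collected set was still empty ("retrieved map[]"). A rough manual reproduction, reusing the node IP, port and curl flags from the failure message (the loop count is just illustrative):

        # Poll the NodePort and collect the distinct backend hostnames that respond
        for i in $(seq 1 34); do
          curl -g -q -s --max-time 15 --connect-timeout 1 http://10.10.190.207:31965/hostName
          echo
        done | sort -u
        # A healthy node-Service should print netserver-0 and netserver-1;
        # an empty result reproduces the failure recorded above.
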
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:01:15.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-8943
Nov  6 01:01:15.440: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-8943
I1106 01:01:15.451582      25 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-8943, replica count: 2
I1106 01:01:18.506169      25 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:21.506680      25 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:24.509228      25 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:27.510062      25 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:30.512028      25 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:01:33.515129      25 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov  6 01:01:33.515: INFO: Creating new exec pod
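
The setup so far is: a TCP ClusterIP service on port 80 in services-8943, switched to type=NodePort, backed by a 2-replica controller, plus a client pod from which the probes are run. A hand-rolled approximation is sketched below; the Deployment, the /agnhost netexec backend and the pause client are assumptions standing in for the framework's exact replication controller and exec-pod specs:

    # Backend pods serving HTTP on port 80 (stand-in for the test's replication controller)
    kubectl -n services-8943 create deployment nodeport-update-service \
      --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --replicas=2 -- /agnhost netexec --http-port=80

    # ClusterIP service on TCP port 80, then switch it to NodePort as the test does
    kubectl -n services-8943 expose deployment nodeport-update-service --port=80
    kubectl -n services-8943 patch svc nodeport-update-service -p '{"spec":{"type":"NodePort"}}'

    # Client pod used for the nc probes
    kubectl -n services-8943 run execpod --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 \
      --command -- /agnhost pause
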
Nov  6 01:01:38.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Nov  6 01:01:39.018: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Nov  6 01:01:39.018: INFO: stdout: "nodeport-update-service-n26rr"
Nov  6 01:01:39.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.59.223 80'
Nov  6 01:01:39.271: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.59.223 80\nConnection to 10.233.59.223 80 port [tcp/http] succeeded!\n"
Nov  6 01:01:39.271: INFO: stdout: "nodeport-update-service-n26rr"
Nov  6 01:01:39.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:39.605: INFO: rc: 1
Nov  6 01:01:39.605: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
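
A "Connection refused" right after the type change usually just means kube-proxy has not yet programmed the freshly allocated node port, which is why the framework keeps retrying. If the retries never succeed, two quick checks (the port number comes from the probe above; the KUBE-NODEPORTS chain applies to kube-proxy's iptables mode):

    # Confirm the nodePort actually allocated to the service
    kubectl -n services-8943 get svc nodeport-update-service \
      -o jsonpath='{.spec.ports[0].nodePort}'

    # On the node being probed, check that kube-proxy installed a rule for that port
    iptables -t nat -L KUBE-NODEPORTS -n | grep 31925
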
Nov  6 01:01:40.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:40.972: INFO: rc: 1
Nov  6 01:01:40.972: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:41.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:42.007: INFO: rc: 1
Nov  6 01:01:42.007: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:42.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:43.254: INFO: rc: 1
Nov  6 01:01:43.254: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:43.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:43.860: INFO: rc: 1
Nov  6 01:01:43.860: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:44.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:45.430: INFO: rc: 1
Nov  6 01:01:45.430: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:45.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:46.238: INFO: rc: 1
Nov  6 01:01:46.238: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:46.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:47.071: INFO: rc: 1
Nov  6 01:01:47.071: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:47.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:47.992: INFO: rc: 1
Nov  6 01:01:47.992: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:48.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:48.899: INFO: rc: 1
Nov  6 01:01:48.899: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:49.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:49.882: INFO: rc: 1
Nov  6 01:01:49.882: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:50.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:50.863: INFO: rc: 1
Nov  6 01:01:50.863: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:51.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:51.961: INFO: rc: 1
Nov  6 01:01:51.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:52.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:53.042: INFO: rc: 1
Nov  6 01:01:53.042: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31925
+ echo hostName
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:53.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:53.917: INFO: rc: 1
Nov  6 01:01:53.917: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:54.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:54.857: INFO: rc: 1
Nov  6 01:01:54.857: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:55.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:55.864: INFO: rc: 1
Nov  6 01:01:55.864: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:56.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:57.174: INFO: rc: 1
Nov  6 01:01:57.174: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:57.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:57.921: INFO: rc: 1
Nov  6 01:01:57.921: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:58.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:58.870: INFO: rc: 1
Nov  6 01:01:58.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31925
+ echo hostName
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:01:59.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:01:59.864: INFO: rc: 1
Nov  6 01:01:59.864: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:00.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:00.868: INFO: rc: 1
Nov  6 01:02:00.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:01.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:01.975: INFO: rc: 1
Nov  6 01:02:01.975: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31925
+ echo hostName
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:02.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:02.850: INFO: rc: 1
Nov  6 01:02:02.850: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:03.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:03.888: INFO: rc: 1
Nov  6 01:02:03.888: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:04.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:04.941: INFO: rc: 1
Nov  6 01:02:04.941: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:05.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:05.860: INFO: rc: 1
Nov  6 01:02:05.860: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:06.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:06.987: INFO: rc: 1
Nov  6 01:02:06.987: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:07.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:09.051: INFO: rc: 1
Nov  6 01:02:09.051: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:09.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:09.858: INFO: rc: 1
Nov  6 01:02:09.858: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:10.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:10.974: INFO: rc: 1
Nov  6 01:02:10.974: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:11.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:11.888: INFO: rc: 1
Nov  6 01:02:11.888: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:12.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:12.865: INFO: rc: 1
Nov  6 01:02:12.865: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:13.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:13.961: INFO: rc: 1
Nov  6 01:02:13.961: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:14.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:15.090: INFO: rc: 1
Nov  6 01:02:15.090: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:15.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:16.050: INFO: rc: 1
Nov  6 01:02:16.050: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:16.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:17.284: INFO: rc: 1
Nov  6 01:02:17.284: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:17.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:17.843: INFO: rc: 1
Nov  6 01:02:17.843: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:18.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:18.859: INFO: rc: 1
Nov  6 01:02:18.859: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:19.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:20.217: INFO: rc: 1
Nov  6 01:02:20.218: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:20.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:20.846: INFO: rc: 1
Nov  6 01:02:20.846: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:21.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:22.730: INFO: rc: 1
Nov  6 01:02:22.730: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:23.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:23.868: INFO: rc: 1
Nov  6 01:02:23.868: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:24.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:24.878: INFO: rc: 1
Nov  6 01:02:24.878: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:25.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:26.099: INFO: rc: 1
Nov  6 01:02:26.099: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:26.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:27.114: INFO: rc: 1
Nov  6 01:02:27.114: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:27.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:27.917: INFO: rc: 1
Nov  6 01:02:27.917: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:28.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:29.008: INFO: rc: 1
Nov  6 01:02:29.008: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:29.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:30.005: INFO: rc: 1
Nov  6 01:02:30.005: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:30.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:30.976: INFO: rc: 1
Nov  6 01:02:30.976: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:31.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:31.899: INFO: rc: 1
Nov  6 01:02:31.899: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:32.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:32.844: INFO: rc: 1
Nov  6 01:02:32.844: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:33.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:33.870: INFO: rc: 1
Nov  6 01:02:33.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:34.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:34.864: INFO: rc: 1
Nov  6 01:02:34.864: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:35.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:35.847: INFO: rc: 1
Nov  6 01:02:35.848: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:36.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:36.876: INFO: rc: 1
Nov  6 01:02:36.876: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:37.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:37.863: INFO: rc: 1
Nov  6 01:02:37.863: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:38.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:38.865: INFO: rc: 1
Nov  6 01:02:38.865: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:39.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:39.860: INFO: rc: 1
Nov  6 01:02:39.860: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:40.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:41.063: INFO: rc: 1
Nov  6 01:02:41.063: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:41.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:41.871: INFO: rc: 1
Nov  6 01:02:41.871: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:42.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:42.862: INFO: rc: 1
Nov  6 01:02:42.862: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:43.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:43.964: INFO: rc: 1
Nov  6 01:02:43.964: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:44.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:44.993: INFO: rc: 1
Nov  6 01:02:44.993: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:45.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:45.844: INFO: rc: 1
Nov  6 01:02:45.844: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:46.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:46.840: INFO: rc: 1
Nov  6 01:02:46.840: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:47.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:47.852: INFO: rc: 1
Nov  6 01:02:47.852: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:48.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:48.852: INFO: rc: 1
Nov  6 01:02:48.852: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:49.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:49.831: INFO: rc: 1
Nov  6 01:02:49.831: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:50.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:50.874: INFO: rc: 1
Nov  6 01:02:50.874: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:51.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:51.850: INFO: rc: 1
Nov  6 01:02:51.850: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:52.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:52.870: INFO: rc: 1
Nov  6 01:02:52.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:53.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:53.874: INFO: rc: 1
Nov  6 01:02:53.874: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:54.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:54.850: INFO: rc: 1
Nov  6 01:02:54.850: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31925
+ echo hostName
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:55.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:55.857: INFO: rc: 1
Nov  6 01:02:55.857: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:56.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:57.272: INFO: rc: 1
Nov  6 01:02:57.272: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31925
+ echo hostName
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:57.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:57.842: INFO: rc: 1
Nov  6 01:02:57.842: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:58.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:58.864: INFO: rc: 1
Nov  6 01:02:58.865: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:02:59.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:02:59.852: INFO: rc: 1
Nov  6 01:02:59.852: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:03:00.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:03:00.864: INFO: rc: 1
Nov  6 01:03:00.864: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:03:01.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:03:01.853: INFO: rc: 1
Nov  6 01:03:01.853: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov  6 01:03:02.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925'
Nov  6 01:03:02.857: INFO: rc: 1
Nov  6 01:03:02.857: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8943 exec execpodwsgzw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31925:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31925
nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... the same kubectl exec / nc probe against 10.10.190.207:31925 was retried 38 more times, roughly once per second, through 01:03:40.106; every attempt returned rc: 1 with stderr "nc: connect to 10.10.190.207 port 31925 (tcp) failed: Connection refused" ...]
Nov  6 01:03:40.106: FAIL: Unexpected error:
    <*errors.errorString | 0xc004497000>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31925 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31925 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001347200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001347200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001347200, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
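The failure above is the service-reachability poll giving up after its 2m0s budget: the probe run from execpodwsgzw never managed to open a TCP connection to NodePort 31925 on 10.10.190.207. The Go sketch below reproduces that kind of poll loop outside the suite; it is not the e2e framework's code, it dials the endpoint directly from wherever it is run (not from inside the exec pod, which is where the test probes from), and the address, interval, and timeout are simply the values seen in this run.

// Minimal sketch (assumed, not k8s.io/kubernetes/test/e2e code): poll a NodePort
// endpoint roughly once per second until an overall deadline, mirroring the
// retry loop in the log above, and report the last error on timeout.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForTCP(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		// 2s per-attempt timeout, mirroring `nc -w 2` in the logged command.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		time.Sleep(interval)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol: %v",
		timeout, addr, lastErr)
}

func main() {
	if err := waitForTCP("10.10.190.207:31925", time.Second, 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("endpoint reachable")
}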
Nov  6 01:03:40.107: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-8943".
STEP: Found 17 events.
Nov  6 01:03:40.134: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodwsgzw: { } Scheduled: Successfully assigned services-8943/execpodwsgzw to node1
Nov  6 01:03:40.134: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-jqdx5: { } Scheduled: Successfully assigned services-8943/nodeport-update-service-jqdx5 to node2
Nov  6 01:03:40.134: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-n26rr: { } Scheduled: Successfully assigned services-8943/nodeport-update-service-n26rr to node1
Nov  6 01:03:40.134: INFO: At 2021-11-06 01:01:15 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-jqdx5
Nov  6 01:03:40.134: INFO: At 2021-11-06 01:01:15 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-n26rr
Nov  6 01:03:40.134: INFO: At 2021-11-06 01:01:17 +0000 UTC - event for nodeport-update-service-n26rr: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:17 +0000 UTC - event for nodeport-update-service-n26rr: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 313.584366ms
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:17 +0000 UTC - event for nodeport-update-service-n26rr: {kubelet node1} Created: Created container nodeport-update-service
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:18 +0000 UTC - event for nodeport-update-service-n26rr: {kubelet node1} Started: Started container nodeport-update-service
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:19 +0000 UTC - event for nodeport-update-service-jqdx5: {kubelet node2} Created: Created container nodeport-update-service
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:19 +0000 UTC - event for nodeport-update-service-jqdx5: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:19 +0000 UTC - event for nodeport-update-service-jqdx5: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 396.505636ms
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:20 +0000 UTC - event for nodeport-update-service-jqdx5: {kubelet node2} Started: Started container nodeport-update-service
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:35 +0000 UTC - event for execpodwsgzw: {kubelet node1} Started: Started container agnhost-container
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:35 +0000 UTC - event for execpodwsgzw: {kubelet node1} Created: Created container agnhost-container
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:35 +0000 UTC - event for execpodwsgzw: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov  6 01:03:40.135: INFO: At 2021-11-06 01:01:35 +0000 UTC - event for execpodwsgzw: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 309.413307ms
Nov  6 01:03:40.137: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Nov  6 01:03:40.137: INFO: execpodwsgzw                   node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:33 +0000 UTC  }]
Nov  6 01:03:40.137: INFO: nodeport-update-service-jqdx5  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:15 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:15 +0000 UTC  }]
Nov  6 01:03:40.137: INFO: nodeport-update-service-n26rr  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:15 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-06 01:01:15 +0000 UTC  }]
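The AfterEach diagnostics above (namespace events followed by pod conditions) can be reproduced with a small client-go program. The sketch below is an illustration only, not the framework's implementation; the kubeconfig path and namespace are hard-coded to the values from this run.

// Sketch: dump events and pod conditions for the test namespace, roughly
// matching the "Collecting events from namespace" / POD table output above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "services-8943"

	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("At %v - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}

	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("POD %s on %s phase %s conditions %v\n",
			p.Name, p.Spec.NodeName, p.Status.Phase, p.Status.Conditions)
	}
}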
Nov  6 01:03:40.137: INFO: 
Nov  6 01:03:40.141: INFO: 
Logging node info for node master1
Nov  6 01:03:40.143: INFO: Node Info: &Node{ObjectMeta:{master1    acabf68f-e6fa-4376-87a7-953399a106b3 82205 0 2021-11-05 20:58:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-05 20:58:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:06:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:29 +0000 UTC,LastTransitionTime:2021-11-05 21:04:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:37 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:37 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:37 +0000 UTC,LastTransitionTime:2021-11-05 20:58:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:37 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b66bbe4d404942179ce344aa1da0c494,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:b59c0f0e-9c14-460c-acfa-6e83037bd04e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:40.144: INFO: 
Logging kubelet events for node master1
Nov  6 01:03:40.146: INFO: 
Logging pods the kubelet thinks are on node master1
Nov  6 01:03:40.168: INFO: kube-proxy-r4cf7 started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.168: INFO: 	Container kube-proxy ready: true, restart count 1
Nov  6 01:03:40.168: INFO: kube-multus-ds-amd64-rr699 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.168: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:40.168: INFO: container-registry-65d7c44b96-dwrs5 started at 2021-11-05 21:06:01 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:40.168: INFO: 	Container docker-registry ready: true, restart count 0
Nov  6 01:03:40.168: INFO: 	Container nginx ready: true, restart count 0
Nov  6 01:03:40.168: INFO: kube-flannel-hkkhj started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:40.168: INFO: 	Init container install-cni ready: true, restart count 2
Nov  6 01:03:40.168: INFO: 	Container kube-flannel ready: true, restart count 2
Nov  6 01:03:40.168: INFO: coredns-8474476ff8-nq2jw started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.168: INFO: 	Container coredns ready: true, restart count 2
Nov  6 01:03:40.168: INFO: node-exporter-lgdzv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:40.168: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:40.168: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:03:40.168: INFO: kube-apiserver-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.168: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov  6 01:03:40.168: INFO: kube-controller-manager-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.168: INFO: 	Container kube-controller-manager ready: true, restart count 3
Nov  6 01:03:40.168: INFO: kube-scheduler-master1 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.168: INFO: 	Container kube-scheduler ready: true, restart count 0
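The per-node listings above and below ("Logging pods the kubelet thinks are on node ...") can be approximated with a field selector on spec.nodeName. The fragment below reuses the cs client and imports from the previous sketch; the node name is taken from this run, and this is again an assumed illustration rather than the framework's own helper.

// Sketch: list the pods scheduled to a given node across all namespaces,
// similar to the per-node "started at ... container statuses recorded" lines.
func logPodsOnNode(cs *kubernetes.Clientset, node string) error {
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node, // server-side filter on the scheduled node
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s started at %v (%d+%d container statuses recorded)\n",
			p.Namespace, p.Name, p.Status.StartTime,
			len(p.Status.InitContainerStatuses), len(p.Status.ContainerStatuses))
	}
	return nil
}

For example, logPodsOnNode(cs, "master1") would print roughly the list logged above for master1.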
W1106 01:03:40.180978      25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:40.246: INFO: 
Latency metrics for node master1
Nov  6 01:03:40.246: INFO: 
Logging node info for node master2
Nov  6 01:03:40.249: INFO: Node Info: &Node{ObjectMeta:{master2    004d4571-8588-4d18-93d0-ad0af4174866 82212 0 2021-11-05 20:59:23 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-05 20:59:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-11-05 21:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-05 21:09:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:41 +0000 UTC,LastTransitionTime:2021-11-05 21:04:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:39 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:39 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:39 +0000 UTC,LastTransitionTime:2021-11-05 20:59:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:39 +0000 UTC,LastTransitionTime:2021-11-05 21:01:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0f1bc4b4acc1463992265eb9f006d2f4,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:d0e797a3-7d35-4e63-b584-b18006ef67fe,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:40.249: INFO: 
Logging kubelet events for node master2
Nov  6 01:03:40.251: INFO: 
Logging pods the kubelet thinks are on node master2
Nov  6 01:03:40.258: INFO: kube-controller-manager-master2 started at 2021-11-05 21:04:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.258: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov  6 01:03:40.258: INFO: kube-proxy-9vm9v started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.258: INFO: 	Container kube-proxy ready: true, restart count 1
Nov  6 01:03:40.258: INFO: kube-flannel-g7q4k started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:40.258: INFO: 	Init container install-cni ready: true, restart count 0
Nov  6 01:03:40.258: INFO: 	Container kube-flannel ready: true, restart count 3
Nov  6 01:03:40.258: INFO: node-exporter-8mxjv started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:40.258: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:40.258: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:03:40.258: INFO: kube-apiserver-master2 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.258: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov  6 01:03:40.258: INFO: kube-scheduler-master2 started at 2021-11-05 21:08:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.258: INFO: 	Container kube-scheduler ready: true, restart count 3
Nov  6 01:03:40.258: INFO: kube-multus-ds-amd64-m5646 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.258: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:40.258: INFO: node-feature-discovery-controller-cff799f9f-8cg9j started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.258: INFO: 	Container nfd-controller ready: true, restart count 0
W1106 01:03:40.272119      25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:40.334: INFO: 
Latency metrics for node master2
Nov  6 01:03:40.334: INFO: 
Logging node info for node master3
Nov  6 01:03:40.339: INFO: Node Info: &Node{ObjectMeta:{master3    d3395dfc-1d8f-4527-88b4-7f472f6a6c0f 82196 0 2021-11-05 20:59:34 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-05 20:59:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-05 21:01:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-05 21:01:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-05 21:12:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:26 +0000 UTC,LastTransitionTime:2021-11-05 21:04:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running 
on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:34 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:34 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:34 +0000 UTC,LastTransitionTime:2021-11-05 20:59:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:34 +0000 UTC,LastTransitionTime:2021-11-05 21:04:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:006015d4e2a7441aa293fbb9db938e38,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a0f65291-184f-4994-a7ea-d1a5b4d71ffa,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:40.339: INFO: 
Logging kubelet events for node master3
Nov  6 01:03:40.341: INFO: 
Logging pods the kubelet thinks are on node master3
Nov  6 01:03:40.350: INFO: kube-apiserver-master3 started at 2021-11-05 21:04:19 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.350: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov  6 01:03:40.350: INFO: kube-controller-manager-master3 started at 2021-11-05 21:00:02 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.350: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov  6 01:03:40.350: INFO: kube-flannel-f55xz started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:40.350: INFO: 	Init container install-cni ready: true, restart count 0
Nov  6 01:03:40.350: INFO: 	Container kube-flannel ready: true, restart count 1
Nov  6 01:03:40.350: INFO: coredns-8474476ff8-qbn9j started at 2021-11-05 21:02:10 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.350: INFO: 	Container coredns ready: true, restart count 1
Nov  6 01:03:40.350: INFO: node-exporter-mqcvx started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:40.350: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:40.350: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:03:40.350: INFO: kube-scheduler-master3 started at 2021-11-05 21:08:19 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.350: INFO: 	Container kube-scheduler ready: true, restart count 3
Nov  6 01:03:40.350: INFO: kube-proxy-s2pzt started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.350: INFO: 	Container kube-proxy ready: true, restart count 2
Nov  6 01:03:40.350: INFO: kube-multus-ds-amd64-cp25f started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.350: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:40.350: INFO: dns-autoscaler-7df78bfcfb-z9dxm started at 2021-11-05 21:02:12 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.350: INFO: 	Container autoscaler ready: true, restart count 1
W1106 01:03:40.362589      25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:40.427: INFO: 
Latency metrics for node master3
Nov  6 01:03:40.427: INFO: 
Logging node info for node node1
Nov  6 01:03:40.430: INFO: Node Info: &Node{ObjectMeta:{node1    290b18e7-da33-4da8-b78a-8a7f28c49abf 82203 0 2021-11-05 21:00:39 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 23:53:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:40 +0000 UTC,LastTransitionTime:2021-11-05 21:04:40 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:36 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:36 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:36 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:36 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f2fc144f1734ec29780a435d0602675,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:7c24c54c-15ba-4c20-b196-32ad0c82be71,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:50c0d65dc6b2d7618e01dd2fb431c2312ad7490d0461d040b112b04809b07401 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:40.431: INFO: 
Logging kubelet events for node node1
Nov  6 01:03:40.432: INFO: 
Logging pods the kubelet thinks are on node node1
Nov  6 01:03:40.452: INFO: up-down-2-mvxp8 started at 2021-11-06 01:02:28 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container up-down-2 ready: true, restart count 0
Nov  6 01:03:40.452: INFO: kube-flannel-hxwks started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Init container install-cni ready: true, restart count 2
Nov  6 01:03:40.452: INFO: 	Container kube-flannel ready: true, restart count 3
Nov  6 01:03:40.452: INFO: nodeport-update-service-n26rr started at 2021-11-06 01:01:15 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov  6 01:03:40.452: INFO: netserver-0 started at 2021-11-06 01:01:49 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:03:40.452: INFO: kube-multus-ds-amd64-mqrl8 started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:40.452: INFO: cmk-init-discover-node1-nnkks started at 2021-11-05 21:13:04 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container discover ready: false, restart count 0
Nov  6 01:03:40.452: INFO: 	Container init ready: false, restart count 0
Nov  6 01:03:40.452: INFO: 	Container install ready: false, restart count 0
Nov  6 01:03:40.452: INFO: node-exporter-fvksz started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:40.452: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:03:40.452: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s started at 2021-11-05 21:17:51 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container tas-extender ready: true, restart count 0
Nov  6 01:03:40.452: INFO: execpodwsgzw started at 2021-11-06 01:01:33 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container agnhost-container ready: true, restart count 0
Nov  6 01:03:40.452: INFO: host-test-container-pod started at 2021-11-06 01:02:11 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container agnhost-container ready: true, restart count 0
Nov  6 01:03:40.452: INFO: kube-proxy-mc4cs started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container kube-proxy ready: true, restart count 2
Nov  6 01:03:40.452: INFO: cmk-cfm9r started at 2021-11-05 21:13:47 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container nodereport ready: true, restart count 0
Nov  6 01:03:40.452: INFO: 	Container reconcile ready: true, restart count 0
Nov  6 01:03:40.452: INFO: prometheus-k8s-0 started at 2021-11-05 21:14:58 +0000 UTC (0+4 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container config-reloader ready: true, restart count 0
Nov  6 01:03:40.452: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov  6 01:03:40.452: INFO: 	Container grafana ready: true, restart count 0
Nov  6 01:03:40.452: INFO: 	Container prometheus ready: true, restart count 1
Nov  6 01:03:40.452: INFO: up-down-2-kmphz started at 2021-11-06 01:02:28 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container up-down-2 ready: true, restart count 0
Nov  6 01:03:40.452: INFO: nginx-proxy-node1 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov  6 01:03:40.452: INFO: kubernetes-dashboard-785dcbb76d-9wtdz started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Nov  6 01:03:40.452: INFO: cmk-webhook-6c9d5f8578-wq5mk started at 2021-11-05 21:13:47 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov  6 01:03:40.452: INFO: collectd-5k6s9 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container collectd ready: true, restart count 0
Nov  6 01:03:40.452: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov  6 01:03:40.452: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov  6 01:03:40.452: INFO: node-feature-discovery-worker-spmbf started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container nfd-worker ready: true, restart count 0
Nov  6 01:03:40.452: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:40.452: INFO: 	Container kube-sriovdp ready: true, restart count 0
W1106 01:03:40.466134      25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:41.165: INFO: 
Latency metrics for node node1
Nov  6 01:03:41.165: INFO: 
Logging node info for node node2
Nov  6 01:03:41.169: INFO: Node Info: &Node{ObjectMeta:{node2    7d7e71f0-82d7-49ba-b69a-56600dd59b3f 82198 0 2021-11-05 21:00:39 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-05 21:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-05 21:01:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-05 21:09:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-05 21:13:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-05 23:54:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-06 00:16:08 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-05 21:04:43 +0000 UTC,LastTransitionTime:2021-11-05 21:04:43 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-06 01:03:35 +0000 UTC,LastTransitionTime:2021-11-05 21:00:39 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-06 01:03:35 +0000 UTC,LastTransitionTime:2021-11-05 21:01:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:415d65c0f8484c488059b324e675b5bd,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c5482a76-3a9a-45bb-ab12-c74550bfe71f,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[localhost:30500/cmk@sha256:8c85a99f0d621f4580064e48909402b99a2e98aff1c1e4164e4e52d46568c5be localhost:30500/cmk:v1.5.1],SizeBytes:724468800,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:e30d246f33998514fb41fc34b11174e739efb54af715b53723b81cc5d2ebd29d 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov  6 01:03:41.170: INFO: 
Logging kubelet events for node node2
Nov  6 01:03:41.172: INFO: 
Logging pods the kubelet thinks are on node node2
Nov  6 01:03:41.184: INFO: up-down-3-l8nlv started at 2021-11-06 01:03:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.184: INFO: 	Container up-down-3 ready: true, restart count 0
Nov  6 01:03:41.185: INFO: verify-service-up-host-exec-pod started at  (0+0 container statuses recorded)
Nov  6 01:03:41.185: INFO: cmk-bnvd2 started at 2021-11-05 21:13:46 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container nodereport ready: true, restart count 0
Nov  6 01:03:41.185: INFO: 	Container reconcile ready: true, restart count 0
Nov  6 01:03:41.185: INFO: prometheus-operator-585ccfb458-vh55q started at 2021-11-05 21:14:41 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:41.185: INFO: 	Container prometheus-operator ready: true, restart count 0
Nov  6 01:03:41.185: INFO: test-container-pod started at 2021-11-06 01:02:11 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:03:41.185: INFO: nginx-proxy-node2 started at 2021-11-05 21:00:39 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov  6 01:03:41.185: INFO: node-feature-discovery-worker-pn6cr started at 2021-11-05 21:09:34 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container nfd-worker ready: true, restart count 0
Nov  6 01:03:41.185: INFO: netserver-1 started at 2021-11-06 01:01:49 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container webserver ready: true, restart count 0
Nov  6 01:03:41.185: INFO: cmk-init-discover-node2-9svdd started at 2021-11-05 21:13:24 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container discover ready: false, restart count 0
Nov  6 01:03:41.185: INFO: 	Container init ready: false, restart count 0
Nov  6 01:03:41.185: INFO: 	Container install ready: false, restart count 0
Nov  6 01:03:41.185: INFO: node-exporter-k7p79 started at 2021-11-05 21:14:48 +0000 UTC (0+2 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov  6 01:03:41.185: INFO: 	Container node-exporter ready: true, restart count 0
Nov  6 01:03:41.185: INFO: nodeport-update-service-jqdx5 started at 2021-11-06 01:01:15 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov  6 01:03:41.185: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg started at 2021-11-05 21:02:14 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Nov  6 01:03:41.185: INFO: up-down-3-pmm2b started at 2021-11-06 01:03:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container up-down-3 ready: true, restart count 0
Nov  6 01:03:41.185: INFO: kube-flannel-cqj7j started at 2021-11-05 21:01:36 +0000 UTC (1+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Init container install-cni ready: true, restart count 1
Nov  6 01:03:41.185: INFO: 	Container kube-flannel ready: true, restart count 2
Nov  6 01:03:41.185: INFO: up-down-2-c8ng6 started at 2021-11-06 01:02:28 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container up-down-2 ready: true, restart count 0
Nov  6 01:03:41.185: INFO: kube-proxy-j9lmg started at 2021-11-05 21:00:42 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container kube-proxy ready: true, restart count 2
Nov  6 01:03:41.185: INFO: up-down-3-bfkm8 started at 2021-11-06 01:03:18 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container up-down-3 ready: true, restart count 0
Nov  6 01:03:41.185: INFO: kube-multus-ds-amd64-p7bxx started at 2021-11-05 21:01:44 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container kube-multus ready: true, restart count 1
Nov  6 01:03:41.185: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p started at 2021-11-05 21:10:45 +0000 UTC (0+1 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov  6 01:03:41.185: INFO: collectd-r2g57 started at 2021-11-05 21:18:40 +0000 UTC (0+3 container statuses recorded)
Nov  6 01:03:41.185: INFO: 	Container collectd ready: true, restart count 0
Nov  6 01:03:41.185: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov  6 01:03:41.185: INFO: 	Container rbac-proxy ready: true, restart count 0
W1106 01:03:41.197049      25 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov  6 01:03:41.467: INFO: 
Latency metrics for node node2
Nov  6 01:03:41.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8943" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [146.061 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Nov  6 01:03:40.106: Unexpected error:
      <*errors.errorString | 0xc004497000>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31925 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31925 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":1,"skipped":175,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Nov  6 01:03:41.484: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov  6 01:02:22.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-7254
STEP: creating service up-down-1 in namespace services-7254
STEP: creating replication controller up-down-1 in namespace services-7254
I1106 01:02:22.499917      33 runners.go:190] Created replication controller with name: up-down-1, namespace: services-7254, replica count: 3
I1106 01:02:25.551777      33 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:02:28.553477      33 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-7254
STEP: creating service up-down-2 in namespace services-7254
STEP: creating replication controller up-down-2 in namespace services-7254
I1106 01:02:28.565912      33 runners.go:190] Created replication controller with name: up-down-2, namespace: services-7254, replica count: 3
I1106 01:02:31.617888      33 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:02:34.618640      33 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
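The "creating service up-down-2" and "creating replication controller up-down-2" steps above amount to building a ClusterIP Service plus a 3-replica controller that it selects. As an illustration only, here is a minimal client-go sketch of the Service half; the selector label name=up-down-2, port 80, and target port 9376 are assumptions for the sketch, not values read out of the framework.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses (path from the log above).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Sketch of the Service behind "up-down-2"; the selector and ports are
	// illustrative assumptions, not the framework's values.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "up-down-2", Namespace: "services-7254"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "up-down-2"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}
	created, err := client.CoreV1().Services("services-7254").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created service with ClusterIP", created.Spec.ClusterIP)
}

The controller half would be created the same way through client.CoreV1().ReplicationControllers("services-7254").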
STEP: verifying service up-down-1 is up
Nov  6 01:02:34.621: INFO: Creating new host exec pod
Nov  6 01:02:34.639: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:36.643: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov  6 01:02:36.643: INFO: Creating new exec pod
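The verification that follows hammers the service's ClusterIP from a helper pod and expects answers from all three backends. A rough standalone equivalent is sketched below, assuming serve-hostname-style backends where each pod replies with its own name; the idea that distinct response bodies identify distinct pods is an assumption of the sketch, and it must run from inside the cluster network to reach the ClusterIP.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// countBackends issues n GET requests against the service address and
// returns the distinct response bodies seen. With serve-hostname-style
// pods (an assumption), three distinct bodies means all three replicas
// answered, the same idea as the 150-request wget loop in the log below.
func countBackends(addr string, n int) map[string]bool {
	client := &http.Client{Timeout: 1 * time.Second}
	seen := map[string]bool{}
	for i := 0; i < n; i++ {
		resp, err := client.Get("http://" + addr)
		if err != nil {
			continue // tolerate transient failures, like the wget loop's "|| true"
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))] = true
	}
	return seen
}

func main() {
	// ClusterIP and request count taken from the log below.
	backends := countBackends("10.233.36.154:80", 150)
	fmt.Printf("saw %d distinct backends: %v\n", len(backends), backends)
}

Three distinct bodies corresponds to the "3 reachable backends" the step expects.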
STEP: verifying service has 3 reachable backends
Nov  6 01:02:44.662: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.154:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-host-exec-pod
Nov  6 01:02:44.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.154:80 2>&1 || true; echo; done'
Nov  6 01:02:45.124: INFO: stderr: "+ seq 1 150\n" followed by the per-request trace "+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n" repeated for each of the 150 requests
-T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n"
Nov  6 01:02:45.124: INFO: stdout: "up-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-dj5kb\n"
Nov  6 01:02:45.124: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.154:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-exec-pod-jfl9p
Nov  6 01:02:45.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-exec-pod-jfl9p -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.154:80 2>&1 || true; echo; done'
Nov  6 01:02:45.478: INFO: stderr: "+ seq 1 150\n" followed by 150 identical pairs of "+ wget -q -T 1 -O - http://10.233.36.154:80\n+ echo\n" (the sh -x trace of the probe loop)
Nov  6 01:02:45.479: INFO: stdout: "up-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-7dp6p\nup-down-1-dj5kb\nup-down-1-dj5kb\nup-down-1-2njbl\nup-down-1-7dp6p\nup-down-1-7dp6p\nup-down-1-2njbl\nup-down-1-2njbl\nup-down-1-dj5kb\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7254
STEP: Deleting pod verify-service-up-exec-pod-jfl9p in namespace services-7254
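The "service up" verification above drives the same probe from two short-lived helper pods, verify-service-up-host-exec-pod and verify-service-up-exec-pod-jfl9p: a 150-iteration wget loop against the ClusterIP 10.233.36.154, where each response body is the name of the backend pod that served it. A minimal sketch of the probe as it could be reproduced by hand, with the pod, namespace and address taken from the log above (any equivalent helper pod in the namespace would do):

    kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 \
      exec verify-service-up-exec-pod-jfl9p -- /bin/sh -x -c \
      'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.154:80 2>&1 || true; echo; done'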
STEP: verifying service up-down-2 is up
Nov  6 01:02:45.491: INFO: Creating new host exec pod
Nov  6 01:02:45.504: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:02:47.507: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov  6 01:02:47.507: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov  6 01:02:51.526: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-host-exec-pod
Nov  6 01:02:51.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done'
Nov  6 01:02:51.924: INFO: stderr: "+ seq 1 150\n" followed by 150 identical pairs of "+ wget -q -T 1 -O - http://10.233.4.45:80\n+ echo\n" (the sh -x trace of the probe loop)
Nov  6 01:02:51.925: INFO: stdout: "up-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\n"
Nov  6 01:02:51.925: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-exec-pod-4c7n7
Nov  6 01:02:51.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-exec-pod-4c7n7 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done'
Nov  6 01:02:52.329: INFO: stderr: "+ seq 1 150\n" followed by 150 identical pairs of "+ wget -q -T 1 -O - http://10.233.4.45:80\n+ echo\n" (the sh -x trace of the probe loop)
Nov  6 01:02:52.330: INFO: stdout: "up-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7254
STEP: Deleting pod verify-service-up-exec-pod-4c7n7 in namespace services-7254
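The "has 3 reachable backends" step is satisfied when every expected endpoint of the service shows up among the responses; here up-down-2 (ClusterIP 10.233.4.45) answers from up-down-2-c8ng6, up-down-2-kmphz and up-down-2-mvxp8. A rough way to eyeball the same assertion by hand from one of the helper pods, assuming as above that the backends echo their own pod name:

    kubectl -n services-7254 exec verify-service-up-host-exec-pod -- /bin/sh -c \
      'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>/dev/null; echo; done' \
      | sort | uniq -c
    # expect exactly three distinct names, one per endpoint of up-down-2; blank lines would be failed requests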
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-7254, will wait for the garbage collector to delete the pods
Nov  6 01:02:52.399: INFO: Deleting ReplicationController up-down-1 took: 3.574626ms
Nov  6 01:02:52.499: INFO: Terminating ReplicationController up-down-1 pods took: 100.263514ms
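Stopping up-down-1 means deleting its ReplicationController and letting the garbage collector cascade-delete the backend pods before the negative check runs. A hand-run equivalent might look like the following sketch; the name=up-down-1 label selector is an assumption about how the test labels its pods:

    kubectl -n services-7254 delete rc up-down-1             # background cascading delete; the GC removes the pods
    kubectl -n services-7254 get pods -l name=up-down-1 -w   # watch until no up-down-1 pods remain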
STEP: verifying service up-down-1 is not up
Nov  6 01:03:00.010: INFO: Creating new host exec pod
Nov  6 01:03:00.027: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:02.031: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:04.032: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:06.031: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Nov  6 01:03:06.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.36.154:80 && echo service-down-failed'
Nov  6 01:03:09.368: INFO: rc: 28
Nov  6 01:03:09.368: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.36.154:80 && echo service-down-failed" in pod services-7254/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.36.154:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.36.154:80
command terminated with exit code 28

error:
exit status 28
Output: 
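The rc 28 above is curl's "operation timed out" code: with no endpoints left behind 10.233.36.154, the connection attempt times out, the "&& echo service-down-failed" marker never prints, and the test takes that as confirmation the service is down. A quick sketch of the same check with the exit code made explicit (the helper pod name is the one the log creates above):

    kubectl -n services-7254 exec verify-service-down-host-exec-pod -- /bin/sh -c \
      'curl -g -s --connect-timeout 2 http://10.233.36.154:80 && echo service-down-failed; echo "rc=$?"'
    # rc=28 (timeout) or rc=7 (connection refused) both mean nothing answered on the ClusterIP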
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7254
STEP: verifying service up-down-2 is still up
Nov  6 01:03:09.376: INFO: Creating new host exec pod
Nov  6 01:03:09.394: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:11.398: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:13.398: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov  6 01:03:13.398: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov  6 01:03:17.420: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-host-exec-pod
Nov  6 01:03:17.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done'
Nov  6 01:03:17.825: INFO: stderr: "+ seq 1 150\n" followed by 150 identical pairs of "+ wget -q -T 1 -O - http://10.233.4.45:80\n+ echo\n" (the sh -x trace of the probe loop)
Nov  6 01:03:17.825: INFO: stdout: "up-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-c8ng6\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-c8ng6\nup-down-2-kmphz\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-mvxp8\nup-down-2-kmphz\nup-down-2-kmphz\n"
Nov  6 01:03:17.826: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-exec-pod-zclqj
Nov  6 01:03:17.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-exec-pod-zclqj -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done'
Nov  6 01:03:18.206: INFO: stderr: "+ seq 1 150" followed by 150 repetitions of the trace pair "+ wget -q -T 1 -O - http://10.233.4.45:80" / "+ echo" (identical shell trace lines elided for readability)
Nov  6 01:03:18.207: INFO: stdout: 150 hostname responses, all from the three up-down-2 backends up-down-2-c8ng6, up-down-2-kmphz and up-down-2-mvxp8 (full response list elided for readability)
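The exchange above is the suite's reachability check: it execs a short-timeout wget loop inside a pod and inspects which backend hostnames answer. A minimal stand-alone sketch of the same pattern, reusing the namespace, pod name, and service IP from this log (the trailing sort | uniq -c aggregation is an illustrative addition, not something the suite runs):

  kubectl --kubeconfig=/root/.kube/config -n services-7254 \
    exec verify-service-up-host-exec-pod -- /bin/sh -c \
    'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done' \
    | sort | uniq -c    # count responses per backend; every replica should appear at least once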
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7254
STEP: Deleting pod verify-service-up-exec-pod-zclqj in namespace services-7254
STEP: creating service up-down-3 in namespace services-7254
STEP: creating replication controller up-down-3 in namespace services-7254
I1106 01:03:18.228721      33 runners.go:190] Created replication controller with name: up-down-3, namespace: services-7254, replica count: 3
I1106 01:03:21.280614      33 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1106 01:03:24.283084      33 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
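The runners.go lines above show the framework creating replication controller up-down-3 and polling until its 3 replicas are Running. Outside the framework, an equivalent wait can be sketched with kubectl alone (the 60 x 2s polling budget below is an assumption, not the framework's own timeout):

  for i in $(seq 1 60); do
    ready=$(kubectl -n services-7254 get rc up-down-3 -o jsonpath='{.status.readyReplicas}')
    [ "${ready:-0}" -ge 3 ] && echo "up-down-3: all 3 replicas ready" && break
    sleep 2
  done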
STEP: verifying service up-down-2 is still up
Nov  6 01:03:24.285: INFO: Creating new host exec pod
Nov  6 01:03:24.303: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:26.306: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov  6 01:03:26.306: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov  6 01:03:30.328: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-host-exec-pod
Nov  6 01:03:30.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done'
Nov  6 01:03:30.696: INFO: stderr: "+ seq 1 150" followed by 150 repetitions of the trace pair "+ wget -q -T 1 -O - http://10.233.4.45:80" / "+ echo" (identical shell trace lines elided for readability)
Nov  6 01:03:30.697: INFO: stdout: 150 hostname responses, all from the three up-down-2 backends up-down-2-c8ng6, up-down-2-kmphz and up-down-2-mvxp8 (full response list elided for readability)
Nov  6 01:03:30.697: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-exec-pod-4nx8z
Nov  6 01:03:30.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-exec-pod-4nx8z -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.45:80 2>&1 || true; echo; done'
Nov  6 01:03:31.089: INFO: stderr: "+ seq 1 150" followed by 150 repetitions of the trace pair "+ wget -q -T 1 -O - http://10.233.4.45:80" / "+ echo" (identical shell trace lines elided for readability)
Nov  6 01:03:31.090: INFO: stdout: 150 hostname responses, all from the three up-down-2 backends up-down-2-c8ng6, up-down-2-kmphz and up-down-2-mvxp8 (full response list elided for readability)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7254
STEP: Deleting pod verify-service-up-exec-pod-4nx8z in namespace services-7254
STEP: verifying service up-down-3 is up
Nov  6 01:03:31.103: INFO: Creating new host exec pod
Nov  6 01:03:31.125: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:33.128: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:35.130: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:37.129: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:39.127: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:41.128: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:43.130: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:45.129: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:47.130: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov  6 01:03:49.132: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
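The repeated status lines above are the framework polling verify-service-up-host-exec-pod until it reports Ready. The same wait can be expressed directly with kubectl (the 5m timeout is illustrative, not the framework's value):

  kubectl -n services-7254 wait --for=condition=Ready \
    pod/verify-service-up-host-exec-pod --timeout=5m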
Nov  6 01:03:49.132: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov  6 01:03:53.151: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.234:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-host-exec-pod
Nov  6 01:03:53.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.234:80 2>&1 || true; echo; done'
Nov  6 01:03:53.488: INFO: stderr: "+ seq 1 150" followed by 150 repetitions of the trace pair "+ wget -q -T 1 -O - http://10.233.53.234:80" / "+ echo" (identical shell trace lines elided for readability)
Nov  6 01:03:53.488: INFO: stdout: 150 hostname responses, all from the three up-down-3 backends up-down-3-bfkm8, up-down-3-l8nlv and up-down-3-pmm2b (full response list elided for readability)
Nov  6 01:03:53.488: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.234:80 2>&1 || true; echo; done" in pod services-7254/verify-service-up-exec-pod-75xxj
Nov  6 01:03:53.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7254 exec verify-service-up-exec-pod-75xxj -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.234:80 2>&1 || true; echo; done'
Nov  6 01:03:53.809: INFO: stderr: "+ seq 1 150" followed by 150 repetitions of the trace pair "+ wget -q -T 1 -O - http://10.233.53.234:80" / "+ echo" (identical shell trace lines elided for readability)
Nov  6 01:03:53.810: INFO: stdout: 150 hostname responses, all from the three up-down-3 backends up-down-3-bfkm8, up-down-3-l8nlv and up-down-3-pmm2b (full response list elided for readability)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7254
STEP: Deleting pod verify-service-up-exec-pod-75xxj in namespace services-7254
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov  6 01:03:53.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7254" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:91.365 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":4,"skipped":796,"failed":0}
Nov  6 01:03:53.842: INFO: Running AfterSuite actions on all nodes


{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for node-Service: http","total":-1,"completed":5,"skipped":622,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for node-Service: http"]}
Nov  6 01:03:39.717: INFO: Running AfterSuite actions on all nodes
Nov  6 01:03:53.890: INFO: Running AfterSuite actions on node 1
Nov  6 01:03:53.890: INFO: Skipping dumping logs from cluster



Summarizing 3 Failures:

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Networking Granular Checks: Services [It] should function for node-Service: http 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

Ran 27 of 5770 Specs in 206.387 seconds
FAIL! -- 24 Passed | 3 Failed | 0 Pending | 5743 Skipped


Ginkgo ran 1 suite in 3m28.061686453s
Test Suite Failed
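For triage, a single failed spec can be re-run in isolation with a focus regex; a hedged sketch, assuming a locally built e2e.test binary (the binary path is an assumption, not taken from this log):

  ./e2e.test --kubeconfig=/root/.kube/config \
    --ginkgo.focus='Networking Granular Checks: Services should function for node-Service: http'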