Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1635566238 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 30 03:57:19.990: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:19.992: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:20.019: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 30 03:57:20.088: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting
Oct 30 03:57:20.088: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting
Oct 30 03:57:20.088: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 30 03:57:20.088: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 30 03:57:20.088: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 30 03:57:20.106: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 30 03:57:20.106: INFO: e2e test version: v1.21.5
Oct 30 03:57:20.107: INFO: kube-apiserver version: v1.21.1
Oct 30 03:57:20.108: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.113: INFO: Cluster IP family: ipv4
Oct 30 03:57:20.111: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.133: INFO: Cluster IP family: ipv4
Oct 30 03:57:20.120: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.137: INFO: Cluster IP family: ipv4
Oct 30 03:57:20.120: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.141: INFO: Cluster IP family: ipv4
Oct 30 03:57:20.121: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.143: INFO: Cluster IP family: ipv4
Oct 30 03:57:20.120: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.143: INFO: Cluster IP family: ipv4
Oct 30 03:57:20.126: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.148: INFO: Cluster IP family: ipv4
Oct 30 03:57:20.134: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.155: INFO: Cluster IP family: ipv4
Oct 30 03:57:20.140: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.163: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
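The pre-flight checks above (pods Running and Ready, daemonsets up) are what the suite runs before scheduling any spec. The sketch below is a minimal client-go equivalent of the pod-readiness count, assuming the same kubeconfig path the log shows; it is an illustration, not the e2e framework code.

```go
// Minimal client-go sketch (not the e2e framework itself): count how many
// kube-system pods are Running and Ready, mirroring the suite's pre-flight check.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	ready := 0
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			continue
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready++
				break
			}
		}
	}
	fmt.Printf("%d / %d pods in namespace 'kube-system' are running and ready\n", ready, len(pods.Items))
}
```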
------------------------------
Oct 30 03:57:20.176: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:57:20.198: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:20.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W1030 03:57:20.393466      38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:20.393: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:20.395: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 30 03:57:20.398: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:20.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-9222" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:20.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
W1030 03:57:20.348016      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:20.348: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:20.350: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Oct 30 03:57:20.367: INFO: (0) /api/v1/nodes/node2/proxy/logs/:
anaconda/
audit/
boot.log
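The `(0) /api/v1/nodes/node2/proxy/logs/:` request above reads a node's log directory through the API server's node proxy subresource. A minimal client-go sketch of that call (clientset construction omitted, node name taken from the log):

```go
// Sketch only: fetch a node's /logs/ listing via the node proxy subresource.
// "cs" is an already-built kubernetes.Interface; "node2" comes from the log.
package sketch

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

func nodeLogsListing(ctx context.Context, cs kubernetes.Interface, node string) (string, error) {
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(node). // e.g. "node2", or "node1:10250" to pin an explicit kubelet port
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(ctx)
	return string(body), err
}
```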
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1030 03:57:20.576798      24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:20.577: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:20.578: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-3378
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-3378
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:20.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3378" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•SSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":116,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:21.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
W1030 03:57:21.070214      30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:21.070: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:21.072: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Oct 30 03:57:21.074: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:21.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-4465" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:21.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Oct 30 03:57:23.225: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/: 
anaconda/
audit/
boot.log
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1030 03:57:20.701496      36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:20.701: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:20.703: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Oct 30 03:57:20.820: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-8527" to be "Succeeded or Failed"
Oct 30 03:57:20.823: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.403505ms
Oct 30 03:57:22.827: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006595424s
Oct 30 03:57:24.831: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010127468s
Oct 30 03:57:26.835: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014596893s
Oct 30 03:57:28.840: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01905287s
Oct 30 03:57:30.845: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024381183s
STEP: Saw pod success
Oct 30 03:57:30.845: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:30.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8527" for this suite.


• [SLOW TEST:10.175 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":1,"skipped":183,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:20.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1030 03:57:20.197988      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:20.198: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:20.199: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Oct 30 03:57:20.215: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:22.219: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:24.219: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:26.219: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 30 03:57:26.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1860 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 30 03:57:27.141: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Oct 30 03:57:27.141: INFO: stdout: "iptables"
Oct 30 03:57:27.141: INFO: proxyMode: iptables
Oct 30 03:57:27.148: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 30 03:57:27.150: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-1860
Oct 30 03:57:27.155: INFO: sourceip-test cluster ip: 10.233.8.49
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Oct 30 03:57:27.173: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:29.177: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:31.177: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:33.177: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-1860 to expose endpoints map[echo-sourceip:[8080]]
Oct 30 03:57:33.186: INFO: successfully validated that service sourceip-test in namespace services-1860 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Oct 30 03:57:33.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 30 03:57:35.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163053, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163053, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163053, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163053, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5cd444966b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:57:37.202: INFO: Waiting up to 2m0s to get response from 10.233.8.49:8080
Oct 30 03:57:37.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1860 exec pause-pod-5cd444966b-qr4ht -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.8.49:8080/clientip'
Oct 30 03:57:37.440: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.8.49:8080/clientip\n"
Oct 30 03:57:37.440: INFO: stdout: "10.244.3.162:36268"
STEP: Verifying the preserved source ip
Oct 30 03:57:37.440: INFO: Waiting up to 2m0s to get response from 10.233.8.49:8080
Oct 30 03:57:37.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1860 exec pause-pod-5cd444966b-zbdd4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.8.49:8080/clientip'
Oct 30 03:57:37.675: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.8.49:8080/clientip\n"
Oct 30 03:57:37.675: INFO: stdout: "10.244.4.46:47696"
STEP: Verifying the preserved source ip
Oct 30 03:57:37.675: INFO: Deleting deployment
Oct 30 03:57:37.678: INFO: Cleaning up the echo server pod
Oct 30 03:57:37.686: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:37.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1860" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:17.524 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:23.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Oct 30 03:57:33.604: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Oct 30 03:57:33.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-test2bn4h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Oct 30 03:57:33.840: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Oct 30 03:57:33.840: INFO: stdout: "10.244.2.11:46836"
STEP: Verifying the preserved source ip
Oct 30 03:57:33.840: INFO: Waiting up to 2m0s to get response from 10.244.1.5:8080
Oct 30 03:57:33.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-test2bn4h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip'
Oct 30 03:57:34.098: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip\n"
Oct 30 03:57:34.098: INFO: stdout: "10.244.2.11:48074"
STEP: Verifying the preserved source ip
Oct 30 03:57:34.098: INFO: Waiting up to 2m0s to get response from 10.244.4.43:8080
Oct 30 03:57:34.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-test2bn4h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.43:8080/clientip'
Oct 30 03:57:34.349: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.43:8080/clientip\n"
Oct 30 03:57:34.349: INFO: stdout: "10.244.2.11:59960"
STEP: Verifying the preserved source ip
Oct 30 03:57:34.349: INFO: Waiting up to 2m0s to get response from 10.244.3.160:8080
Oct 30 03:57:34.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-test2bn4h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.160:8080/clientip'
Oct 30 03:57:34.583: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.160:8080/clientip\n"
Oct 30 03:57:34.583: INFO: stdout: "10.244.2.11:57544"
STEP: Verifying the preserved source ip
Oct 30 03:57:34.583: INFO: Waiting up to 2m0s to get response from 10.244.2.11:8080
Oct 30 03:57:34.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testb86vf -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.11:8080/clientip'
Oct 30 03:57:34.830: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.11:8080/clientip\n"
Oct 30 03:57:34.830: INFO: stdout: "10.244.0.10:40190"
STEP: Verifying the preserved source ip
Oct 30 03:57:34.830: INFO: Waiting up to 2m0s to get response from 10.244.1.5:8080
Oct 30 03:57:34.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testb86vf -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip'
Oct 30 03:57:35.090: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip\n"
Oct 30 03:57:35.090: INFO: stdout: "10.244.0.10:34554"
STEP: Verifying the preserved source ip
Oct 30 03:57:35.090: INFO: Waiting up to 2m0s to get response from 10.244.4.43:8080
Oct 30 03:57:35.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testb86vf -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.43:8080/clientip'
Oct 30 03:57:35.317: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.43:8080/clientip\n"
Oct 30 03:57:35.317: INFO: stdout: "10.244.0.10:43860"
STEP: Verifying the preserved source ip
Oct 30 03:57:35.317: INFO: Waiting up to 2m0s to get response from 10.244.3.160:8080
Oct 30 03:57:35.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testb86vf -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.160:8080/clientip'
Oct 30 03:57:35.534: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.160:8080/clientip\n"
Oct 30 03:57:35.534: INFO: stdout: "10.244.0.10:58496"
STEP: Verifying the preserved source ip
Oct 30 03:57:35.534: INFO: Waiting up to 2m0s to get response from 10.244.2.11:8080
Oct 30 03:57:35.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testcf6n5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.11:8080/clientip'
Oct 30 03:57:35.764: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.11:8080/clientip\n"
Oct 30 03:57:35.764: INFO: stdout: "10.244.1.5:44736"
STEP: Verifying the preserved source ip
Oct 30 03:57:35.765: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Oct 30 03:57:35.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testcf6n5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Oct 30 03:57:35.991: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Oct 30 03:57:35.991: INFO: stdout: "10.244.1.5:60506"
STEP: Verifying the preserved source ip
Oct 30 03:57:35.991: INFO: Waiting up to 2m0s to get response from 10.244.4.43:8080
Oct 30 03:57:35.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testcf6n5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.43:8080/clientip'
Oct 30 03:57:36.231: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.43:8080/clientip\n"
Oct 30 03:57:36.231: INFO: stdout: "10.244.1.5:54164"
STEP: Verifying the preserved source ip
Oct 30 03:57:36.231: INFO: Waiting up to 2m0s to get response from 10.244.3.160:8080
Oct 30 03:57:36.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testcf6n5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.160:8080/clientip'
Oct 30 03:57:36.459: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.160:8080/clientip\n"
Oct 30 03:57:36.459: INFO: stdout: "10.244.1.5:57996"
STEP: Verifying the preserved source ip
Oct 30 03:57:36.459: INFO: Waiting up to 2m0s to get response from 10.244.2.11:8080
Oct 30 03:57:36.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testrz88h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.11:8080/clientip'
Oct 30 03:57:36.727: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.11:8080/clientip\n"
Oct 30 03:57:36.727: INFO: stdout: "10.244.4.43:56672"
STEP: Verifying the preserved source ip
Oct 30 03:57:36.727: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Oct 30 03:57:36.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testrz88h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Oct 30 03:57:36.988: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Oct 30 03:57:36.988: INFO: stdout: "10.244.4.43:54420"
STEP: Verifying the preserved source ip
Oct 30 03:57:36.988: INFO: Waiting up to 2m0s to get response from 10.244.1.5:8080
Oct 30 03:57:36.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testrz88h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip'
Oct 30 03:57:37.247: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip\n"
Oct 30 03:57:37.248: INFO: stdout: "10.244.4.43:42500"
STEP: Verifying the preserved source ip
Oct 30 03:57:37.248: INFO: Waiting up to 2m0s to get response from 10.244.3.160:8080
Oct 30 03:57:37.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testrz88h -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.160:8080/clientip'
Oct 30 03:57:37.520: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.160:8080/clientip\n"
Oct 30 03:57:37.520: INFO: stdout: "10.244.4.43:37534"
STEP: Verifying the preserved source ip
Oct 30 03:57:37.520: INFO: Waiting up to 2m0s to get response from 10.244.2.11:8080
Oct 30 03:57:37.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testzhgmq -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.11:8080/clientip'
Oct 30 03:57:37.813: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.11:8080/clientip\n"
Oct 30 03:57:37.813: INFO: stdout: "10.244.3.160:46530"
STEP: Verifying the preserved source ip
Oct 30 03:57:37.813: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Oct 30 03:57:37.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testzhgmq -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Oct 30 03:57:38.267: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Oct 30 03:57:38.267: INFO: stdout: "10.244.3.160:46344"
STEP: Verifying the preserved source ip
Oct 30 03:57:38.267: INFO: Waiting up to 2m0s to get response from 10.244.1.5:8080
Oct 30 03:57:38.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testzhgmq -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip'
Oct 30 03:57:38.622: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.5:8080/clientip\n"
Oct 30 03:57:38.622: INFO: stdout: "10.244.3.160:46302"
STEP: Verifying the preserved source ip
Oct 30 03:57:38.622: INFO: Waiting up to 2m0s to get response from 10.244.4.43:8080
Oct 30 03:57:38.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-5172 exec no-snat-testzhgmq -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.43:8080/clientip'
Oct 30 03:57:38.932: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.43:8080/clientip\n"
Oct 30 03:57:38.932: INFO: stdout: "10.244.3.160:39270"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:38.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-5172" for this suite.


• [SLOW TEST:15.450 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":3,"skipped":326,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:20.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-1652
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:20.632: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:20.662: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:22.665: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:24.665: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:26.665: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:28.664: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:30.666: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:32.666: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:34.666: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:36.665: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:38.665: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:40.666: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:42.667: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:57:42.673: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:57:48.708: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:57:48.708: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:48.716: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:48.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1652" for this suite.


S [SKIPPING] [28.240 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:20.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1030 03:57:20.581469      31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:20.581: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:20.583: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-3913
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:20.695: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:20.726: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:22.729: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:24.729: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:26.730: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:28.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:30.729: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:32.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:34.730: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:36.729: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:38.729: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:40.729: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:57:40.733: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 30 03:57:42.740: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:57:52.761: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:57:52.761: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:52.768: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:52.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3913" for this suite.


S [SKIPPING] [32.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:20.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1030 03:57:20.634774      34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:20.635: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:20.636: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-7989
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:20.754: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:20.787: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:22.791: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:24.791: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:26.791: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:28.791: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:30.792: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:32.790: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:34.790: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:36.791: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:38.791: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:40.792: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:57:40.798: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 30 03:57:42.803: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:57:52.836: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:57:52.836: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:52.843: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:52.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7989" for this suite.


S [SKIPPING] [32.240 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:20.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1030 03:57:20.190574      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:20.190: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:20.194: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-4757
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:20.307: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:20.340: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:22.345: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:24.345: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:26.343: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:28.345: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:30.342: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:32.345: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:34.344: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:36.345: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:38.345: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:40.344: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:57:40.349: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 30 03:57:42.355: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:57:54.378: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:57:54.378: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:54.386: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:54.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4757" for this suite.


S [SKIPPING] [34.232 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:20.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-2425
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:20.942: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:20.972: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:22.976: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:24.975: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:26.975: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:28.976: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:30.976: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:32.977: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:34.975: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:36.977: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:38.975: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:40.975: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:57:40.979: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 30 03:57:42.984: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:57:55.002: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:57:55.002: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:55.009: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:55.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2425" for this suite.


S [SKIPPING] [34.212 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:21.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-7493
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:21.300: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:21.333: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:23.336: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:25.337: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:27.336: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:29.337: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:31.336: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:33.336: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:35.336: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:37.337: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:39.337: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:41.336: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:43.336: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:57:43.341: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:57:53.375: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:57:53.375: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Oct 30 03:57:53.395: INFO: Service node-port-service in namespace nettest-7493 found.
Oct 30 03:57:53.408: INFO: Service session-affinity-service in namespace nettest-7493 found.
STEP: Waiting for NodePort service to expose endpoint
Oct 30 03:57:54.411: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Oct 30 03:57:55.414: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: checking kube-proxy URLs
STEP: Getting kube-proxy self URL /healthz
Oct 30 03:57:55.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-7493 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz'
Oct 30 03:57:55.673: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n"
Oct 30 03:57:55.673: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Sat, 30 Oct 2021 03:57:55 GMT\r\nContent-Length: 153\r\n\r\n{\"lastUpdated\": \"2021-10-30 03:57:55.663302588 +0000 UTC m=+24417.028989650\",\"currentTime\": \"2021-10-30 03:57:55.663302588 +0000 UTC m=+24417.028989650\"}"
STEP: Getting kube-proxy self URL /healthz
Oct 30 03:57:55.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-7493 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz'
Oct 30 03:57:56.046: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n"
Oct 30 03:57:56.046: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Sat, 30 Oct 2021 03:57:56 GMT\r\nContent-Length: 153\r\n\r\n{\"lastUpdated\": \"2021-10-30 03:57:56.037414641 +0000 UTC m=+24417.403101704\",\"currentTime\": \"2021-10-30 03:57:56.037414641 +0000 UTC m=+24417.403101704\"}"
STEP: Checking status code against http://localhost:10249/proxyMode
Oct 30 03:57:56.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-7493 exec host-test-container-pod -- /bin/sh -x -c curl -o /dev/null -i -q -s -w %{http_code} --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 30 03:57:56.901: INFO: stderr: "+ curl -o /dev/null -i -q -s -w '%{http_code}' --connect-timeout 1 http://localhost:10249/proxyMode\n"
Oct 30 03:57:56.901: INFO: stdout: "200"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:56.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7493" for this suite.


• [SLOW TEST:35.779 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
------------------------------
{"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":-1,"completed":1,"skipped":400,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:56.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Oct 30 03:57:56.937: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:56.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9013" for this suite.


S [SKIPPING] [0.030 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:54.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-8558
STEP: deleting original service nodeport-reuse
Oct 30 03:57:54.543: INFO: Creating new host exec pod
Oct 30 03:57:54.556: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:56.561: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:58.560: INFO: The status of Pod hostexec is Running (Ready = true)
Oct 30 03:57:58.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8558 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :30218' | tail -n +2 | grep LISTEN'
Oct 30 03:57:58.996: INFO: stderr: "+ ss -ant46 'sport = :30218'\n+ tail -n +2\n+ grep LISTEN\n"
Oct 30 03:57:58.996: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 30218
STEP: deleting service nodeport-reuse in namespace services-8558
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:57:59.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8558" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":1,"skipped":54,"failed":0}

SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:52.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-1173
Oct 30 03:57:52.874: INFO: hairpin-test cluster ip: 10.233.55.203
STEP: creating a client/server pod
Oct 30 03:57:52.892: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:54.895: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:56.896: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:58.895: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:00.895: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:02.896: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:04.896: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-1173 to expose endpoints map[hairpin:[8080]]
Oct 30 03:58:04.903: INFO: successfully validated that service hairpin-test in namespace services-1173 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Oct 30 03:58:05.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1173 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 30 03:58:06.206: INFO: stderr: "+ nc -v -t -w 2 hairpin-test 8080\n+ echo hostName\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Oct 30 03:58:06.206: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 30 03:58:06.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1173 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.55.203 8080'
Oct 30 03:58:06.458: INFO: stderr: "+ nc -v -t -w 2 10.233.55.203 8080\nConnection to 10.233.55.203 8080 port [tcp/http-alt] succeeded!\n+ echo hostName\n"
Oct 30 03:58:06.459: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:06.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1173" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:13.621 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":1,"skipped":162,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:38.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-2915
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:39.124: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:39.155: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:41.159: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:43.160: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:45.159: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:47.161: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:49.159: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:51.159: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:53.162: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:55.159: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:57.159: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:57:59.160: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:01.159: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:01.164: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:58:11.187: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:58:11.187: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:11.194: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:11.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2915" for this suite.


S [SKIPPING] [32.216 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:59.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Oct 30 03:57:59.138: INFO: Creating new exec pod
Oct 30 03:58:11.158: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Oct 30 03:58:11.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8163 exec execpod-noendpointsztlrm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 30 03:58:12.602: INFO: rc: 1
Oct 30 03:58:12.602: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8163 exec execpod-noendpointsztlrm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:12.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8163" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:13.507 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":2,"skipped":72,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:12.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Oct 30 03:58:12.807: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Oct 30 03:58:12.912: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:12.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-3492" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.146 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:13.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-3747
STEP: changing service nodeport-range-test to out-of-range NodePort 50863
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 50863
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:13.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3747" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":3,"skipped":598,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:13.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 30 03:58:13.915: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:13.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-9527" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:11.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6220.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6220.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6220.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6220.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6220.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6220.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 30 03:58:17.574: INFO: DNS probes using dns-6220/dns-test-98387c6d-0162-4b03-ba75-d764a794834a succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:17.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6220" for this suite.


• [SLOW TEST:6.112 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":4,"skipped":478,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:53.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-8626
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:53.205: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:53.237: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:55.240: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:57.240: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:59.240: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:01.241: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:03.242: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:05.240: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:07.240: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:09.241: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:11.240: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:13.240: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:15.241: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:15.248: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:58:21.271: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:58:21.271: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:21.277: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:21.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8626" for this suite.


S [SKIPPING] [28.235 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:55.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-1628
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:55.271: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:55.302: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:57.305: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:59.306: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:01.306: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:03.307: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:05.306: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:07.307: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:09.305: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:11.306: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:13.306: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:15.305: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:15.310: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 30 03:58:17.313: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:58:23.333: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:58:23.333: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:23.339: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:23.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1628" for this suite.


S [SKIPPING] [28.188 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:57.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-2494
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:57:57.255: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:57:57.291: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:59.295: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:01.295: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:03.295: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:05.294: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:07.295: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:09.295: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:11.296: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:13.295: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:15.295: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:17.295: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:19.294: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:19.298: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:58:25.318: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:58:25.318: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Oct 30 03:58:25.342: INFO: Service node-port-service in namespace nettest-2494 found.
Oct 30 03:58:25.356: INFO: Service session-affinity-service in namespace nettest-2494 found.
STEP: Waiting for NodePort service to expose endpoint
Oct 30 03:58:26.359: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Oct 30 03:58:27.361: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.233.19.75:90 (config.clusterIP)
Oct 30 03:58:27.368: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.69:9080/dial?request=echo%20nooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo&protocol=udp&host=10.233.19.75&port=90&tries=1'] Namespace:nettest-2494 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:58:27.368: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:58:27.466: INFO: Waiting for responses: map[]
Oct 30 03:58:27.466: INFO: reached 10.233.19.75 after 0/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:27.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2494" for this suite.


• [SLOW TEST:30.334 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":-1,"completed":2,"skipped":504,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:21.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Oct 30 03:58:21.539: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-c26e283a-dab4-4ccc-b1fb-d40169ee3640  dns-7997  1d67943b-31fa-4c60-85f2-0088ef58cf3e 149128 0 2021-10-30 03:58:21 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-10-30 03:58:21 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-lv5gd,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-fdv5j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fdv5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 30 03:58:25.549: INFO: testServerIP is 10.244.3.180
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Oct 30 03:58:25.559: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-7997  4587f351-4942-40ac-b074-00e8e8e50fad 149235 0 2021-10-30 03:58:25 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-10-30 03:58:25 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cc9rt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cc9rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:n
il,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.3.180],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Oct 30 03:58:29.565: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-7997 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:58:29.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Oct 30 03:58:30.076: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-7997 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:58:30.076: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:58:30.170: INFO: Deleting pod e2e-dns-utils...
Oct 30 03:58:30.176: INFO: Deleting pod e2e-configmap-dns-server-c26e283a-dab4-4ccc-b1fb-d40169ee3640...
Oct 30 03:58:30.184: INFO: Deleting configmap e2e-coredns-configmap-lv5gd...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:30.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7997" for this suite.


• [SLOW TEST:8.689 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":361,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:14.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-5a0c258a-096b-4e14-b372-fe4d239b634e]
STEP: Verifying pods for RC slow-terminating-unready-pod
Oct 30 03:58:14.359: INFO: Pod name slow-terminating-unready-pod: Found 0 pods out of 1
Oct 30 03:58:19.362: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Oct 30 03:58:19.370: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-zhfnv]: "NOW: 2021-10-30 03:58:19.369555279 +0000 UTC m=+2.515772128", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-1838.svc.cluster.local
Oct 30 03:58:19.370: INFO: Creating new exec pod
Oct 30 03:58:25.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1838 exec execpod-89cpg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1838.svc.cluster.local:80/'
Oct 30 03:58:25.681: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1838.svc.cluster.local:80/\n"
Oct 30 03:58:25.681: INFO: stdout: "NOW: 2021-10-30 03:58:25.675440162 +0000 UTC m=+8.821657010"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-1838 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Oct 30 03:58:30.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1838 exec execpod-89cpg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1838.svc.cluster.local:80/; test "$?" -ne "0"'
Oct 30 03:58:31.042: INFO: rc: 1
Oct 30 03:58:31.042: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2021-10-30 03:58:31.035993466 +0000 UTC m=+14.182210315, err error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1838 exec execpod-89cpg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1838.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2021-10-30 03:58:31.035993466 +0000 UTC m=+14.182210315
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1838.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Oct 30 03:58:33.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1838 exec execpod-89cpg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1838.svc.cluster.local:80/; test "$?" -ne "0"'
Oct 30 03:58:34.535: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1838.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Oct 30 03:58:34.535: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Oct 30 03:58:34.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1838 exec execpod-89cpg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-1838.svc.cluster.local:80/'
Oct 30 03:58:34.796: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-1838.svc.cluster.local:80/\n"
Oct 30 03:58:34.796: INFO: stdout: "NOW: 2021-10-30 03:58:34.787368507 +0000 UTC m=+17.933585356"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-1838
STEP: deleting service tolerate-unready in namespace services-1838
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:34.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1838" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:20.507 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":4,"skipped":824,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:07.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-4234
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:58:07.518: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:07.551: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:09.556: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:11.555: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:13.555: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:15.555: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:17.554: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:19.555: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:21.558: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:23.555: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:25.554: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:27.558: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:29.556: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:29.562: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:58:35.598: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:58:35.598: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:35.605: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:35.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4234" for this suite.


S [SKIPPING] [28.209 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
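
The skip reason "Requires at least 2 nodes (not -1)" compares a configured node count rather than the live node list; on the local provider that count apparently defaults to -1, so the multi-node "Granular Checks" specs skip even though both netserver pods came up. A hedged Go sketch of that kind of guard, with assumed names (this is not the framework's actual code):

package main

import "fmt"

// skipUnlessNodeCountIsAtLeast mimics the shape of the guard that emitted the
// message above: if the configured count is unknown it stays at -1 and the
// comparison fails regardless of how many nodes are actually schedulable.
func skipUnlessNodeCountIsAtLeast(configuredNodeCount, minNodeCount int) {
	if configuredNodeCount < minNodeCount {
		fmt.Printf("Requires at least %d nodes (not %d)\n", minNodeCount, configuredNodeCount)
	}
}

func main() {
	skipUnlessNodeCountIsAtLeast(-1, 2) // reproduces the skip message seen above
}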
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:36.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 30 03:58:36.115: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:36.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7142" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
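
The ESIPP (external source IP preservation) specs are skipped because they provision cloud load balancers, which only the gce and gke providers support here. A minimal Go sketch of the Service shape they exercise: type LoadBalancer with externalTrafficPolicy: Local, which routes only to nodes that actually host endpoints, matching the "should only target nodes with endpoints" spec name above. It assumes the k8s.io/api types and is illustrative, not the test's own code.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "external-local"},
		Spec: v1.ServiceSpec{
			Type:                  v1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": "netexec"}, // assumed selector
			Ports:                 []v1.ServicePort{{Port: 80}},
		},
	}
	fmt.Println(svc.Spec.Type, svc.Spec.ExternalTrafficPolicy)
}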
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:30.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Oct 30 03:57:30.944: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:32.947: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:34.949: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Oct 30 03:57:34.974: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:36.978: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:57:38.977: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Oct 30 03:58:39.333: INFO: boom-server pod logs: 2021/10/30 03:57:33 external ip: 10.244.4.45
2021/10/30 03:57:33 listen on 0.0.0.0:9000
2021/10/30 03:57:33 probing 10.244.4.45
2021/10/30 03:57:39 tcp packet: &{SrcPort:39594 DestPort:9000 Seq:4102080855 Ack:0 Flags:40962 WindowSize:29200 Checksum:1023 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:39 tcp packet: &{SrcPort:39594 DestPort:9000 Seq:4102080856 Ack:800523597 Flags:32784 WindowSize:229 Checksum:7619 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:39 connection established
2021/10/30 03:57:39 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 154 170 47 181 126 173 244 128 201 88 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:39 checksumer: &{sum:516996 oddByte:33 length:39}
2021/10/30 03:57:39 ret:  517029
2021/10/30 03:57:39 ret:  58284
2021/10/30 03:57:39 ret:  58284
2021/10/30 03:57:39 boom packet injected
2021/10/30 03:57:39 tcp packet: &{SrcPort:39594 DestPort:9000 Seq:4102080856 Ack:800523597 Flags:32785 WindowSize:229 Checksum:7618 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:41 tcp packet: &{SrcPort:46354 DestPort:9000 Seq:718233114 Ack:0 Flags:40962 WindowSize:29200 Checksum:5813 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:41 tcp packet: &{SrcPort:46354 DestPort:9000 Seq:718233115 Ack:3713999739 Flags:32784 WindowSize:229 Checksum:23761 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:41 connection established
2021/10/30 03:57:41 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 181 18 221 93 156 219 42 207 94 27 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:41 checksumer: &{sum:471862 oddByte:33 length:39}
2021/10/30 03:57:41 ret:  471895
2021/10/30 03:57:41 ret:  13150
2021/10/30 03:57:41 ret:  13150
2021/10/30 03:57:41 boom packet injected
2021/10/30 03:57:41 tcp packet: &{SrcPort:46354 DestPort:9000 Seq:718233115 Ack:3713999739 Flags:32785 WindowSize:229 Checksum:23760 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:43 tcp packet: &{SrcPort:37249 DestPort:9000 Seq:1880054262 Ack:0 Flags:40962 WindowSize:29200 Checksum:61771 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:43 tcp packet: &{SrcPort:37249 DestPort:9000 Seq:1880054263 Ack:677148498 Flags:32784 WindowSize:229 Checksum:37046 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:43 connection established
2021/10/30 03:57:43 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 145 129 40 90 240 178 112 15 89 247 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:43 checksumer: &{sum:496114 oddByte:33 length:39}
2021/10/30 03:57:43 ret:  496147
2021/10/30 03:57:43 ret:  37402
2021/10/30 03:57:43 ret:  37402
2021/10/30 03:57:43 boom packet injected
2021/10/30 03:57:43 tcp packet: &{SrcPort:37249 DestPort:9000 Seq:1880054263 Ack:677148498 Flags:32785 WindowSize:229 Checksum:37045 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:45 tcp packet: &{SrcPort:36399 DestPort:9000 Seq:2300792812 Ack:0 Flags:40962 WindowSize:29200 Checksum:56771 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:45 tcp packet: &{SrcPort:36399 DestPort:9000 Seq:2300792813 Ack:554631262 Flags:32784 WindowSize:229 Checksum:62367 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:45 connection established
2021/10/30 03:57:45 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 142 47 33 13 121 190 137 35 79 237 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:45 checksumer: &{sum:460928 oddByte:33 length:39}
2021/10/30 03:57:45 ret:  460961
2021/10/30 03:57:45 ret:  2216
2021/10/30 03:57:45 ret:  2216
2021/10/30 03:57:45 boom packet injected
2021/10/30 03:57:45 tcp packet: &{SrcPort:36399 DestPort:9000 Seq:2300792813 Ack:554631262 Flags:32785 WindowSize:229 Checksum:62366 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:47 tcp packet: &{SrcPort:33508 DestPort:9000 Seq:4110535756 Ack:0 Flags:40962 WindowSize:29200 Checksum:63742 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:47 tcp packet: &{SrcPort:33508 DestPort:9000 Seq:4110535757 Ack:48824052 Flags:32784 WindowSize:229 Checksum:9882 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:47 connection established
2021/10/30 03:57:47 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 130 228 2 231 120 84 245 1 204 77 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:47 checksumer: &{sum:486461 oddByte:33 length:39}
2021/10/30 03:57:47 ret:  486494
2021/10/30 03:57:47 ret:  27749
2021/10/30 03:57:47 ret:  27749
2021/10/30 03:57:47 boom packet injected
2021/10/30 03:57:47 tcp packet: &{SrcPort:33508 DestPort:9000 Seq:4110535757 Ack:48824052 Flags:32785 WindowSize:229 Checksum:9881 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:49 tcp packet: &{SrcPort:39594 DestPort:9000 Seq:4102080857 Ack:800523598 Flags:32784 WindowSize:229 Checksum:53150 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:49 tcp packet: &{SrcPort:45322 DestPort:9000 Seq:2297690088 Ack:0 Flags:40962 WindowSize:29200 Checksum:890 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:49 tcp packet: &{SrcPort:45322 DestPort:9000 Seq:2297690089 Ack:429897575 Flags:32784 WindowSize:229 Checksum:23066 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:49 connection established
2021/10/30 03:57:49 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 177 10 25 158 48 199 136 243 247 233 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:49 checksumer: &{sum:543225 oddByte:33 length:39}
2021/10/30 03:57:49 ret:  543258
2021/10/30 03:57:49 ret:  18978
2021/10/30 03:57:49 ret:  18978
2021/10/30 03:57:49 boom packet injected
2021/10/30 03:57:49 tcp packet: &{SrcPort:45322 DestPort:9000 Seq:2297690089 Ack:429897575 Flags:32785 WindowSize:229 Checksum:23065 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:51 tcp packet: &{SrcPort:46354 DestPort:9000 Seq:718233116 Ack:3713999740 Flags:32784 WindowSize:229 Checksum:3759 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:51 tcp packet: &{SrcPort:35687 DestPort:9000 Seq:4056903881 Ack:0 Flags:40962 WindowSize:29200 Checksum:16262 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:51 tcp packet: &{SrcPort:35687 DestPort:9000 Seq:4056903882 Ack:2589347207 Flags:32784 WindowSize:229 Checksum:30581 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:51 connection established
2021/10/30 03:57:51 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 139 103 154 84 198 231 241 207 112 202 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:51 checksumer: &{sum:539340 oddByte:33 length:39}
2021/10/30 03:57:51 ret:  539373
2021/10/30 03:57:51 ret:  15093
2021/10/30 03:57:51 ret:  15093
2021/10/30 03:57:51 boom packet injected
2021/10/30 03:57:51 tcp packet: &{SrcPort:35687 DestPort:9000 Seq:4056903882 Ack:2589347207 Flags:32785 WindowSize:229 Checksum:30580 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:53 tcp packet: &{SrcPort:37249 DestPort:9000 Seq:1880054264 Ack:677148499 Flags:32784 WindowSize:229 Checksum:17044 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:53 tcp packet: &{SrcPort:43513 DestPort:9000 Seq:2811144862 Ack:0 Flags:40962 WindowSize:29200 Checksum:10640 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:53 tcp packet: &{SrcPort:43513 DestPort:9000 Seq:2811144863 Ack:1240712435 Flags:32784 WindowSize:229 Checksum:12966 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:53 connection established
2021/10/30 03:57:53 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 169 249 73 242 62 83 167 142 170 159 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:53 checksumer: &{sum:551425 oddByte:33 length:39}
2021/10/30 03:57:53 ret:  551458
2021/10/30 03:57:53 ret:  27178
2021/10/30 03:57:53 ret:  27178
2021/10/30 03:57:53 boom packet injected
2021/10/30 03:57:53 tcp packet: &{SrcPort:43513 DestPort:9000 Seq:2811144863 Ack:1240712435 Flags:32785 WindowSize:229 Checksum:12965 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:55 tcp packet: &{SrcPort:36399 DestPort:9000 Seq:2300792814 Ack:554631263 Flags:32784 WindowSize:229 Checksum:42365 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:55 tcp packet: &{SrcPort:41655 DestPort:9000 Seq:2137943428 Ack:0 Flags:40962 WindowSize:29200 Checksum:36411 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:55 tcp packet: &{SrcPort:41655 DestPort:9000 Seq:2137943429 Ack:676310955 Flags:32784 WindowSize:229 Checksum:50796 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:55 connection established
2021/10/30 03:57:55 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 162 183 40 78 41 11 127 110 109 133 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:55 checksumer: &{sum:459103 oddByte:33 length:39}
2021/10/30 03:57:55 ret:  459136
2021/10/30 03:57:55 ret:  391
2021/10/30 03:57:55 ret:  391
2021/10/30 03:57:55 boom packet injected
2021/10/30 03:57:55 tcp packet: &{SrcPort:41655 DestPort:9000 Seq:2137943429 Ack:676310955 Flags:32785 WindowSize:229 Checksum:50795 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:57 tcp packet: &{SrcPort:33508 DestPort:9000 Seq:4110535758 Ack:48824053 Flags:32784 WindowSize:229 Checksum:55415 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:57 tcp packet: &{SrcPort:46673 DestPort:9000 Seq:1170407786 Ack:0 Flags:40962 WindowSize:29200 Checksum:6295 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:57 tcp packet: &{SrcPort:46673 DestPort:9000 Seq:1170407787 Ack:3024682066 Flags:32784 WindowSize:229 Checksum:28759 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:57 connection established
2021/10/30 03:57:57 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 182 81 180 71 117 178 69 195 1 107 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:57 checksumer: &{sum:489125 oddByte:33 length:39}
2021/10/30 03:57:57 ret:  489158
2021/10/30 03:57:57 ret:  30413
2021/10/30 03:57:57 ret:  30413
2021/10/30 03:57:57 boom packet injected
2021/10/30 03:57:57 tcp packet: &{SrcPort:46673 DestPort:9000 Seq:1170407787 Ack:3024682066 Flags:32785 WindowSize:229 Checksum:28758 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:59 tcp packet: &{SrcPort:45322 DestPort:9000 Seq:2297690090 Ack:429897576 Flags:32784 WindowSize:229 Checksum:3064 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:59 tcp packet: &{SrcPort:34719 DestPort:9000 Seq:2660429366 Ack:0 Flags:40962 WindowSize:29200 Checksum:63964 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:57:59 tcp packet: &{SrcPort:34719 DestPort:9000 Seq:2660429367 Ack:4145726606 Flags:32784 WindowSize:229 Checksum:16062 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:57:59 connection established
2021/10/30 03:57:59 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 135 159 247 25 61 238 158 146 238 55 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:57:59 checksumer: &{sum:487111 oddByte:33 length:39}
2021/10/30 03:57:59 ret:  487144
2021/10/30 03:57:59 ret:  28399
2021/10/30 03:57:59 ret:  28399
2021/10/30 03:57:59 boom packet injected
2021/10/30 03:57:59 tcp packet: &{SrcPort:34719 DestPort:9000 Seq:2660429367 Ack:4145726606 Flags:32785 WindowSize:229 Checksum:16061 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:01 tcp packet: &{SrcPort:35687 DestPort:9000 Seq:4056903883 Ack:2589347208 Flags:32784 WindowSize:229 Checksum:10579 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:01 tcp packet: &{SrcPort:46140 DestPort:9000 Seq:3329071434 Ack:0 Flags:40962 WindowSize:29200 Checksum:62079 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:01 tcp packet: &{SrcPort:46140 DestPort:9000 Seq:3329071435 Ack:1751075190 Flags:32784 WindowSize:229 Checksum:14693 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:01 connection established
2021/10/30 03:58:01 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 180 60 104 93 194 214 198 109 153 75 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:01 checksumer: &{sum:468669 oddByte:33 length:39}
2021/10/30 03:58:01 ret:  468702
2021/10/30 03:58:01 ret:  9957
2021/10/30 03:58:01 ret:  9957
2021/10/30 03:58:01 boom packet injected
2021/10/30 03:58:01 tcp packet: &{SrcPort:46140 DestPort:9000 Seq:3329071435 Ack:1751075190 Flags:32785 WindowSize:229 Checksum:14692 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:03 tcp packet: &{SrcPort:43513 DestPort:9000 Seq:2811144864 Ack:1240712436 Flags:32784 WindowSize:229 Checksum:58497 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:03 tcp packet: &{SrcPort:43515 DestPort:9000 Seq:3151847242 Ack:0 Flags:40962 WindowSize:29200 Checksum:14641 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:03 tcp packet: &{SrcPort:43515 DestPort:9000 Seq:3151847243 Ack:518055727 Flags:32784 WindowSize:229 Checksum:9148 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:03 connection established
2021/10/30 03:58:03 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 169 251 30 223 96 143 187 221 95 75 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:03 checksumer: &{sum:561089 oddByte:33 length:39}
2021/10/30 03:58:03 ret:  561122
2021/10/30 03:58:03 ret:  36842
2021/10/30 03:58:03 ret:  36842
2021/10/30 03:58:03 boom packet injected
2021/10/30 03:58:03 tcp packet: &{SrcPort:43515 DestPort:9000 Seq:3151847243 Ack:518055727 Flags:32785 WindowSize:229 Checksum:9147 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:05 tcp packet: &{SrcPort:41655 DestPort:9000 Seq:2137943430 Ack:676310956 Flags:32784 WindowSize:229 Checksum:30794 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:05 tcp packet: &{SrcPort:38513 DestPort:9000 Seq:2292706950 Ack:0 Flags:40962 WindowSize:29200 Checksum:59699 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:05 tcp packet: &{SrcPort:38513 DestPort:9000 Seq:2292706951 Ack:4164578140 Flags:32784 WindowSize:229 Checksum:28342 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:05 connection established
2021/10/30 03:58:05 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 150 113 248 56 228 188 136 167 238 135 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:05 checksumer: &{sum:496488 oddByte:33 length:39}
2021/10/30 03:58:05 ret:  496521
2021/10/30 03:58:05 ret:  37776
2021/10/30 03:58:05 ret:  37776
2021/10/30 03:58:05 boom packet injected
2021/10/30 03:58:05 tcp packet: &{SrcPort:38513 DestPort:9000 Seq:2292706951 Ack:4164578140 Flags:32785 WindowSize:229 Checksum:28341 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:07 tcp packet: &{SrcPort:46673 DestPort:9000 Seq:1170407788 Ack:3024682067 Flags:32784 WindowSize:229 Checksum:8755 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:07 tcp packet: &{SrcPort:32994 DestPort:9000 Seq:2507954141 Ack:0 Flags:40962 WindowSize:29200 Checksum:32966 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:07 tcp packet: &{SrcPort:32994 DestPort:9000 Seq:2507954142 Ack:139723892 Flags:32784 WindowSize:229 Checksum:21831 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:07 connection established
2021/10/30 03:58:07 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 128 226 8 82 125 212 149 124 87 222 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:07 checksumer: &{sum:548977 oddByte:33 length:39}
2021/10/30 03:58:07 ret:  549010
2021/10/30 03:58:07 ret:  24730
2021/10/30 03:58:07 ret:  24730
2021/10/30 03:58:07 boom packet injected
2021/10/30 03:58:07 tcp packet: &{SrcPort:32994 DestPort:9000 Seq:2507954142 Ack:139723892 Flags:32785 WindowSize:229 Checksum:21829 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:09 tcp packet: &{SrcPort:34719 DestPort:9000 Seq:2660429368 Ack:4145726607 Flags:32784 WindowSize:229 Checksum:61594 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:09 tcp packet: &{SrcPort:39276 DestPort:9000 Seq:1714177575 Ack:0 Flags:40962 WindowSize:29200 Checksum:41329 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:09 tcp packet: &{SrcPort:39276 DestPort:9000 Seq:1714177576 Ack:2402827917 Flags:32784 WindowSize:229 Checksum:44323 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:09 connection established
2021/10/30 03:58:09 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 153 108 143 54 183 237 102 44 70 40 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:09 checksumer: &{sum:451083 oddByte:33 length:39}
2021/10/30 03:58:09 ret:  451116
2021/10/30 03:58:09 ret:  57906
2021/10/30 03:58:09 ret:  57906
2021/10/30 03:58:09 boom packet injected
2021/10/30 03:58:09 tcp packet: &{SrcPort:39276 DestPort:9000 Seq:1714177576 Ack:2402827917 Flags:32785 WindowSize:229 Checksum:44322 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:11 tcp packet: &{SrcPort:46140 DestPort:9000 Seq:3329071436 Ack:1751075191 Flags:32784 WindowSize:229 Checksum:60226 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:11 tcp packet: &{SrcPort:44771 DestPort:9000 Seq:1545188128 Ack:0 Flags:40962 WindowSize:29200 Checksum:8516 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:11 tcp packet: &{SrcPort:44771 DestPort:9000 Seq:1545188129 Ack:3623084954 Flags:32784 WindowSize:229 Checksum:14172 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:11 connection established
2021/10/30 03:58:11 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 174 227 215 242 92 250 92 25 179 33 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:11 checksumer: &{sum:526448 oddByte:33 length:39}
2021/10/30 03:58:11 ret:  526481
2021/10/30 03:58:11 ret:  2201
2021/10/30 03:58:11 ret:  2201
2021/10/30 03:58:11 boom packet injected
2021/10/30 03:58:11 tcp packet: &{SrcPort:44771 DestPort:9000 Seq:1545188129 Ack:3623084954 Flags:32785 WindowSize:229 Checksum:14171 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:13 tcp packet: &{SrcPort:37383 DestPort:9000 Seq:829213866 Ack:0 Flags:40962 WindowSize:29200 Checksum:18290 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:13 tcp packet: &{SrcPort:37383 DestPort:9000 Seq:829213867 Ack:2704157192 Flags:32784 WindowSize:229 Checksum:17938 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:13 connection established
2021/10/30 03:58:13 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 146 7 161 44 163 104 49 108 204 171 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:13 checksumer: &{sum:438611 oddByte:33 length:39}
2021/10/30 03:58:13 ret:  438644
2021/10/30 03:58:13 ret:  45434
2021/10/30 03:58:13 ret:  45434
2021/10/30 03:58:13 boom packet injected
2021/10/30 03:58:13 tcp packet: &{SrcPort:37383 DestPort:9000 Seq:829213867 Ack:2704157192 Flags:32785 WindowSize:229 Checksum:17937 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:13 tcp packet: &{SrcPort:43515 DestPort:9000 Seq:3151847244 Ack:518055728 Flags:32784 WindowSize:229 Checksum:54681 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:15 tcp packet: &{SrcPort:38513 DestPort:9000 Seq:2292706952 Ack:4164578141 Flags:32784 WindowSize:229 Checksum:8339 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:15 tcp packet: &{SrcPort:33250 DestPort:9000 Seq:876643585 Ack:0 Flags:40962 WindowSize:29200 Checksum:38044 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:15 tcp packet: &{SrcPort:33250 DestPort:9000 Seq:876643586 Ack:2743632606 Flags:32784 WindowSize:229 Checksum:12348 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:15 connection established
2021/10/30 03:58:15 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 129 226 163 134 252 62 52 64 133 2 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:15 checksumer: &{sum:452441 oddByte:33 length:39}
2021/10/30 03:58:15 ret:  452474
2021/10/30 03:58:15 ret:  59264
2021/10/30 03:58:15 ret:  59264
2021/10/30 03:58:15 boom packet injected
2021/10/30 03:58:15 tcp packet: &{SrcPort:33250 DestPort:9000 Seq:876643586 Ack:2743632606 Flags:32785 WindowSize:229 Checksum:12347 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:17 tcp packet: &{SrcPort:32994 DestPort:9000 Seq:2507954143 Ack:139723893 Flags:32784 WindowSize:229 Checksum:1827 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:17 tcp packet: &{SrcPort:38454 DestPort:9000 Seq:739627511 Ack:0 Flags:40962 WindowSize:29200 Checksum:13229 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:17 tcp packet: &{SrcPort:38454 DestPort:9000 Seq:739627512 Ack:3413311283 Flags:32784 WindowSize:229 Checksum:9020 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:17 connection established
2021/10/30 03:58:17 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 150 54 203 113 120 147 44 21 209 248 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:17 checksumer: &{sum:476758 oddByte:33 length:39}
2021/10/30 03:58:17 ret:  476791
2021/10/30 03:58:17 ret:  18046
2021/10/30 03:58:17 ret:  18046
2021/10/30 03:58:17 boom packet injected
2021/10/30 03:58:17 tcp packet: &{SrcPort:38454 DestPort:9000 Seq:739627512 Ack:3413311283 Flags:32785 WindowSize:229 Checksum:9019 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:19 tcp packet: &{SrcPort:34684 DestPort:9000 Seq:2760059865 Ack:0 Flags:40962 WindowSize:29200 Checksum:26694 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:19 tcp packet: &{SrcPort:34684 DestPort:9000 Seq:2760059866 Ack:159738760 Flags:32784 WindowSize:229 Checksum:42398 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:19 connection established
2021/10/30 03:58:19 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 135 124 9 131 228 232 164 131 43 218 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:19 checksumer: &{sum:541379 oddByte:33 length:39}
2021/10/30 03:58:19 ret:  541412
2021/10/30 03:58:19 ret:  17132
2021/10/30 03:58:19 ret:  17132
2021/10/30 03:58:19 boom packet injected
2021/10/30 03:58:19 tcp packet: &{SrcPort:34684 DestPort:9000 Seq:2760059866 Ack:159738760 Flags:32785 WindowSize:229 Checksum:42397 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:19 tcp packet: &{SrcPort:39276 DestPort:9000 Seq:1714177577 Ack:2402827918 Flags:32784 WindowSize:229 Checksum:24320 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:21 tcp packet: &{SrcPort:44771 DestPort:9000 Seq:1545188130 Ack:3623084955 Flags:32784 WindowSize:229 Checksum:59704 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:21 tcp packet: &{SrcPort:36594 DestPort:9000 Seq:44503670 Ack:0 Flags:40962 WindowSize:29200 Checksum:5183 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:21 tcp packet: &{SrcPort:36594 DestPort:9000 Seq:44503671 Ack:327586834 Flags:32784 WindowSize:229 Checksum:5947 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:21 connection established
2021/10/30 03:58:21 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 142 242 19 133 13 114 2 167 18 119 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:21 checksumer: &{sum:525378 oddByte:33 length:39}
2021/10/30 03:58:21 ret:  525411
2021/10/30 03:58:21 ret:  1131
2021/10/30 03:58:21 ret:  1131
2021/10/30 03:58:21 boom packet injected
2021/10/30 03:58:21 tcp packet: &{SrcPort:36594 DestPort:9000 Seq:44503671 Ack:327586834 Flags:32785 WindowSize:229 Checksum:5946 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:23 tcp packet: &{SrcPort:37383 DestPort:9000 Seq:829213868 Ack:2704157193 Flags:32784 WindowSize:229 Checksum:63471 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:23 tcp packet: &{SrcPort:36291 DestPort:9000 Seq:451978738 Ack:0 Flags:40962 WindowSize:29200 Checksum:25047 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:23 tcp packet: &{SrcPort:36291 DestPort:9000 Seq:451978739 Ack:719016831 Flags:32784 WindowSize:229 Checksum:34368 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:23 connection established
2021/10/30 03:58:23 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 141 195 42 217 204 223 26 240 165 243 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:23 checksumer: &{sum:613570 oddByte:33 length:39}
2021/10/30 03:58:23 ret:  613603
2021/10/30 03:58:23 ret:  23788
2021/10/30 03:58:23 ret:  23788
2021/10/30 03:58:23 boom packet injected
2021/10/30 03:58:23 tcp packet: &{SrcPort:36291 DestPort:9000 Seq:451978739 Ack:719016831 Flags:32785 WindowSize:229 Checksum:34367 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:25 tcp packet: &{SrcPort:33250 DestPort:9000 Seq:876643587 Ack:2743632607 Flags:32784 WindowSize:229 Checksum:57881 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:25 tcp packet: &{SrcPort:46754 DestPort:9000 Seq:1007876156 Ack:0 Flags:40962 WindowSize:29200 Checksum:48570 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:25 tcp packet: &{SrcPort:46754 DestPort:9000 Seq:1007876157 Ack:1662990155 Flags:32784 WindowSize:229 Checksum:48706 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:25 connection established
2021/10/30 03:58:25 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 182 162 99 29 176 171 60 18 248 61 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:25 checksumer: &{sum:440445 oddByte:33 length:39}
2021/10/30 03:58:25 ret:  440478
2021/10/30 03:58:25 ret:  47268
2021/10/30 03:58:25 ret:  47268
2021/10/30 03:58:25 boom packet injected
2021/10/30 03:58:25 tcp packet: &{SrcPort:46754 DestPort:9000 Seq:1007876157 Ack:1662990155 Flags:32785 WindowSize:229 Checksum:48705 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:27 tcp packet: &{SrcPort:38454 DestPort:9000 Seq:739627513 Ack:3413311284 Flags:32784 WindowSize:229 Checksum:54552 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:27 tcp packet: &{SrcPort:41897 DestPort:9000 Seq:1494258444 Ack:0 Flags:40962 WindowSize:29200 Checksum:4374 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:27 tcp packet: &{SrcPort:41897 DestPort:9000 Seq:1494258445 Ack:3974724881 Flags:32784 WindowSize:229 Checksum:14909 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:27 connection established
2021/10/30 03:58:27 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 163 169 236 231 246 113 89 16 147 13 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:27 checksumer: &{sum:466417 oddByte:33 length:39}
2021/10/30 03:58:27 ret:  466450
2021/10/30 03:58:27 ret:  7705
2021/10/30 03:58:27 ret:  7705
2021/10/30 03:58:27 boom packet injected
2021/10/30 03:58:27 tcp packet: &{SrcPort:41897 DestPort:9000 Seq:1494258445 Ack:3974724881 Flags:32785 WindowSize:229 Checksum:14908 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:29 tcp packet: &{SrcPort:34684 DestPort:9000 Seq:2760059867 Ack:159738761 Flags:32784 WindowSize:229 Checksum:22394 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:29 tcp packet: &{SrcPort:36316 DestPort:9000 Seq:1445763593 Ack:0 Flags:40962 WindowSize:29200 Checksum:6905 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:29 tcp packet: &{SrcPort:36316 DestPort:9000 Seq:1445763594 Ack:471554848 Flags:32784 WindowSize:229 Checksum:12047 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:29 connection established
2021/10/30 03:58:29 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 141 220 28 25 212 128 86 44 154 10 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:29 checksumer: &{sum:436717 oddByte:33 length:39}
2021/10/30 03:58:29 ret:  436750
2021/10/30 03:58:29 ret:  43540
2021/10/30 03:58:29 ret:  43540
2021/10/30 03:58:29 boom packet injected
2021/10/30 03:58:29 tcp packet: &{SrcPort:36316 DestPort:9000 Seq:1445763594 Ack:471554848 Flags:32785 WindowSize:229 Checksum:12046 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:31 tcp packet: &{SrcPort:36594 DestPort:9000 Seq:44503672 Ack:327586835 Flags:32784 WindowSize:229 Checksum:51479 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:31 tcp packet: &{SrcPort:38209 DestPort:9000 Seq:4216866356 Ack:0 Flags:40962 WindowSize:29200 Checksum:47724 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:31 tcp packet: &{SrcPort:38209 DestPort:9000 Seq:4216866357 Ack:1510054547 Flags:32784 WindowSize:229 Checksum:18777 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:31 connection established
2021/10/30 03:58:31 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 149 65 90 0 19 243 251 88 70 53 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:31 checksumer: &{sum:442307 oddByte:33 length:39}
2021/10/30 03:58:31 ret:  442340
2021/10/30 03:58:31 ret:  49130
2021/10/30 03:58:31 ret:  49130
2021/10/30 03:58:31 boom packet injected
2021/10/30 03:58:31 tcp packet: &{SrcPort:38209 DestPort:9000 Seq:4216866357 Ack:1510054547 Flags:32785 WindowSize:229 Checksum:18776 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:33 tcp packet: &{SrcPort:36291 DestPort:9000 Seq:451978740 Ack:719016832 Flags:32784 WindowSize:229 Checksum:14365 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:33 tcp packet: &{SrcPort:38028 DestPort:9000 Seq:1751518925 Ack:0 Flags:40962 WindowSize:29200 Checksum:32171 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:33 tcp packet: &{SrcPort:38028 DestPort:9000 Seq:1751518926 Ack:4104713046 Flags:32784 WindowSize:229 Checksum:4445 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:33 connection established
2021/10/30 03:58:33 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 148 140 244 167 108 182 104 102 14 206 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:33 checksumer: &{sum:531434 oddByte:33 length:39}
2021/10/30 03:58:33 ret:  531467
2021/10/30 03:58:33 ret:  7187
2021/10/30 03:58:33 ret:  7187
2021/10/30 03:58:33 boom packet injected
2021/10/30 03:58:33 tcp packet: &{SrcPort:38028 DestPort:9000 Seq:1751518926 Ack:4104713046 Flags:32785 WindowSize:229 Checksum:4444 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:35 tcp packet: &{SrcPort:46754 DestPort:9000 Seq:1007876158 Ack:1662990156 Flags:32784 WindowSize:229 Checksum:28704 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:35 tcp packet: &{SrcPort:40125 DestPort:9000 Seq:4108867566 Ack:0 Flags:40962 WindowSize:29200 Checksum:38917 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:35 tcp packet: &{SrcPort:40125 DestPort:9000 Seq:4108867567 Ack:3397501888 Flags:32784 WindowSize:229 Checksum:32163 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:35 connection established
2021/10/30 03:58:35 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 156 189 202 128 61 32 244 232 87 239 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:35 checksumer: &{sum:537454 oddByte:33 length:39}
2021/10/30 03:58:35 ret:  537487
2021/10/30 03:58:35 ret:  13207
2021/10/30 03:58:35 ret:  13207
2021/10/30 03:58:35 boom packet injected
2021/10/30 03:58:35 tcp packet: &{SrcPort:40125 DestPort:9000 Seq:4108867567 Ack:3397501888 Flags:32785 WindowSize:229 Checksum:32162 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:37 tcp packet: &{SrcPort:41897 DestPort:9000 Seq:1494258446 Ack:3974724882 Flags:32784 WindowSize:229 Checksum:60441 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:37 tcp packet: &{SrcPort:41458 DestPort:9000 Seq:1899558337 Ack:0 Flags:40962 WindowSize:29200 Checksum:28893 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:37 tcp packet: &{SrcPort:41458 DestPort:9000 Seq:1899558338 Ack:512531931 Flags:32784 WindowSize:229 Checksum:8326 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:37 connection established
2021/10/30 03:58:37 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 161 242 30 139 23 59 113 56 245 194 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:37 checksumer: &{sum:503996 oddByte:33 length:39}
2021/10/30 03:58:37 ret:  504029
2021/10/30 03:58:37 ret:  45284
2021/10/30 03:58:37 ret:  45284
2021/10/30 03:58:37 boom packet injected
2021/10/30 03:58:37 tcp packet: &{SrcPort:41458 DestPort:9000 Seq:1899558338 Ack:512531931 Flags:32785 WindowSize:229 Checksum:8325 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:39 tcp packet: &{SrcPort:36316 DestPort:9000 Seq:1445763595 Ack:471554849 Flags:32784 WindowSize:229 Checksum:57580 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:39 tcp packet: &{SrcPort:46358 DestPort:9000 Seq:1434908307 Ack:0 Flags:40962 WindowSize:29200 Checksum:28872 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.163
2021/10/30 03:58:39 tcp packet: &{SrcPort:46358 DestPort:9000 Seq:1434908308 Ack:3404406222 Flags:32784 WindowSize:229 Checksum:60493 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.163
2021/10/30 03:58:39 connection established
2021/10/30 03:58:39 calling checksumTCP: 10.244.4.45 10.244.3.163 [35 40 181 22 202 233 151 46 85 134 246 148 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/30 03:58:39 checksumer: &{sum:476897 oddByte:33 length:39}
2021/10/30 03:58:39 ret:  476930
2021/10/30 03:58:39 ret:  18185
2021/10/30 03:58:39 ret:  18185
2021/10/30 03:58:39 boom packet injected
2021/10/30 03:58:39 tcp packet: &{SrcPort:46358 DestPort:9000 Seq:1434908308 Ack:3404406222 Flags:32785 WindowSize:229 Checksum:60492 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.163

Oct 30 03:58:39.333: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:39.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-8539" for this suite.


• [SLOW TEST:68.438 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":2,"skipped":202,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:17.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-3480
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:58:17.846: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:17.877: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:19.880: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:21.882: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:23.879: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:25.880: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:27.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:29.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:31.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:33.880: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:35.880: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:37.881: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:39.880: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:39.885: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:58:45.903: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:58:45.903: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:45.912: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:45.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3480" for this suite.


S [SKIPPING] [28.182 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:23.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-5212
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:58:23.465: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:23.495: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:25.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:27.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:29.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:31.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:33.499: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:35.498: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:37.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:39.498: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:41.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:43.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:45.498: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:45.503: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:58:49.524: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:58:49.524: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:49.531: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:49.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5212" for this suite.


S [SKIPPING] [26.182 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:37.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-9523
STEP: creating service service-headless in namespace services-9523
STEP: creating replication controller service-headless in namespace services-9523
I1030 03:57:37.954102      29 runners.go:190] Created replication controller with name: service-headless, namespace: services-9523, replica count: 3
I1030 03:57:41.004949      29 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:57:44.005482      29 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:57:47.007707      29 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:57:50.008011      29 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-9523
STEP: creating service service-headless-toggled in namespace services-9523
STEP: creating replication controller service-headless-toggled in namespace services-9523
I1030 03:57:50.021422      29 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-9523, replica count: 3
I1030 03:57:53.072744      29 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:57:56.073361      29 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:57:59.074657      29 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Oct 30 03:57:59.077: INFO: Creating new host exec pod
Oct 30 03:57:59.088: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:01.091: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:03.092: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:05.092: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:07.093: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:09.092: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:58:09.092: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:58:15.111: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done" in pod services-9523/verify-service-up-host-exec-pod
Oct 30 03:58:15.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done'
Oct 30 03:58:15.611: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n"
Oct 30 03:58:15.612: INFO: stdout: "service-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\
nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\n"
Oct 30 03:58:15.612: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done" in pod services-9523/verify-service-up-exec-pod-tg4nc
Oct 30 03:58:15.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-up-exec-pod-tg4nc -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done'
Oct 30 03:58:16.014: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n"
Oct 30 03:58:16.015: INFO: stdout: "service-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\
nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\n"
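The two exec blocks above are the positive reachability probe: from a host-network pod and from a regular pod, the test fires 150 one-second-timeout wget requests at the ClusterIP and records which backend answered each one. A roughly equivalent manual probe, using the pod, namespace and ClusterIP taken from this log, is:

    # 150 short-timeout probes against the ClusterIP; "|| true" keeps the loop
    # going past individual failures, and the bare echo separates responses.
    kubectl --kubeconfig=/root/.kube/config -n services-9523 \
      exec verify-service-up-exec-pod-tg4nc -- /bin/sh -c \
      'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done'

Seeing all of the expected backend pod names somewhere in that output is what the framework treats as "service is up".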
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9523
STEP: Deleting pod verify-service-up-exec-pod-tg4nc in namespace services-9523
STEP: verifying service-headless is not up
Oct 30 03:58:16.027: INFO: Creating new host exec pod
Oct 30 03:58:16.038: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:18.042: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:20.041: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:22.043: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:58:22.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.109:80 && echo service-down-failed'
Oct 30 03:58:24.329: INFO: rc: 28
Oct 30 03:58:24.329: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.43.109:80 && echo service-down-failed" in pod services-9523/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.109:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.43.109:80
command terminated with exit code 28

error:
exit status 28
Output: 
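rc 28 is curl's "operation timed out" exit code, and here it is the desired outcome: the ClusterIP used for the service-headless check (10.233.43.109) is expected to go unanswered, so the marker string after && must never be printed. Inside the host-exec pod the negative check reduces to:

    # Pass condition: curl times out (exit 28) and the marker is not printed;
    # any HTTP answer would run the && branch and print "service-down-failed".
    curl -g -s --connect-timeout 2 http://10.233.43.109:80 && echo service-down-failed
    echo "exit code: $?"   # 28 (timeout) is the expected result here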
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9523
STEP: adding service.kubernetes.io/headless label
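The label flip itself happens through the test's API client and is not echoed as a command, so the kubectl equivalent below is only a sketch: the service name service-headless-toggled is inferred from the backend pod names earlier in the log, and the label value is illustrative.

    # Apply the label under test (service name and value assumed, see note above) ...
    kubectl -n services-9523 label service service-headless-toggled service.kubernetes.io/headless=true
    # ... and the later "removing" step corresponds to dropping it again:
    kubectl -n services-9523 label service service-headless-toggled service.kubernetes.io/headless-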
STEP: verifying service is not up
Oct 30 03:58:24.342: INFO: Creating new host exec pod
Oct 30 03:58:24.355: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:26.359: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:28.358: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:58:28.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.50.124:80 && echo service-down-failed'
Oct 30 03:58:30.650: INFO: rc: 28
Oct 30 03:58:30.650: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.50.124:80 && echo service-down-failed" in pod services-9523/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.50.124:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.50.124:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9523
STEP: removing service.kubernetes.io/headless label
STEP: verifying service is up
Oct 30 03:58:30.665: INFO: Creating new host exec pod
Oct 30 03:58:30.676: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:32.679: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:34.681: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:36.680: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:58:36.680: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:58:44.695: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done" in pod services-9523/verify-service-up-host-exec-pod
Oct 30 03:58:44.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done'
Oct 30 03:58:45.089: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n"
Oct 30 03:58:45.089: INFO: stdout: "service-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\
nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\n"
Oct 30 03:58:45.090: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done" in pod services-9523/verify-service-up-exec-pod-7cfdb
Oct 30 03:58:45.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-up-exec-pod-7cfdb -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done'
Oct 30 03:58:45.504: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.124:80\n+ echo\n"
Oct 30 03:58:45.505: INFO: stdout: "service-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\
nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-n79bw\nservice-headless-toggled-jm8ch\nservice-headless-toggled-n79bw\nservice-headless-toggled-f9j4g\nservice-headless-toggled-jm8ch\nservice-headless-toggled-f9j4g\nservice-headless-toggled-f9j4g\n"
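For the "verifying service has 3 reachable backends" step the framework only needs every expected pod name to appear among the responses above. Piping the same probe through uniq makes the per-backend distribution visible; a sketch reusing the host exec pod and addresses from this log:

    # Tally responses per backend; blank lines are probes that returned no body.
    kubectl -n services-9523 exec verify-service-up-host-exec-pod -- /bin/sh -c \
      'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.124:80 2>&1 || true; echo; done' \
      | sort | uniq -c | sort -rn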
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9523
STEP: Deleting pod verify-service-up-exec-pod-7cfdb in namespace services-9523
STEP: verifying service-headless is still not up
Oct 30 03:58:45.519: INFO: Creating new host exec pod
Oct 30 03:58:45.533: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:47.538: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:49.536: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:58:49.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.109:80 && echo service-down-failed'
Oct 30 03:58:52.472: INFO: rc: 28
Oct 30 03:58:52.472: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.43.109:80 && echo service-down-failed" in pod services-9523/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9523 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.43.109:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.43.109:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9523
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:52.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9523" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:74.565 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":114,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:52.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
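The same static apiserver paths can be spot-checked by hand with kubectl's raw passthrough; a minimal sketch against the kubeconfig used by this run:

    # Each path should be served without error by the apiserver.
    for p in /healthz /api /apis /metrics /openapi/v2 /version /logs; do
      kubectl --kubeconfig=/root/.kube/config get --raw "$p" >/dev/null && echo "ok $p"
    done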
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:52.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8781" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":3,"skipped":189,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:53.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 30 03:58:53.076: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:58:53.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-1640" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
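
The ESIPP (externalTrafficPolicy=Local) specs back out in BeforeEach because they need a cloud load balancer, and the skip is driven by the provider the suite was started with. A hedged sketch of how that is usually selected when invoking the e2e binary; the flag names come from the upstream e2e framework, not from this log:

# run only the ESIPP specs against a provider that supports LoadBalancer services
./e2e.test --provider=gce --kubeconfig="$HOME/.kube/config" -ginkgo.focus='ESIPP'
# this run used a local/no-cloud provider, so these specs are skipped
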
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:35.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-709
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:58:35.165: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:35.196: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:37.200: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:39.211: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:41.200: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:43.201: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:45.200: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:47.201: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:49.199: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:51.199: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:53.201: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:55.199: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:57.201: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:57.207: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:59:01.230: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:59:01.230: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:59:01.237: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:01.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-709" for this suite.


S [SKIPPING] [26.189 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
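
The "Requires at least 2 nodes (not -1)" skip fires when the framework cannot find two schedulable nodes it can use for test endpoints. A quick manual sanity check; the framework applies additional readiness and taint filtering beyond what is shown here:

kubectl get nodes -o wide
kubectl get nodes --field-selector spec.unschedulable=true   # cordoned nodes are not counted
kubectl get nodes --no-headers | wc -l
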
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:36.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-6822
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:58:36.374: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:36.410: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:38.414: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:40.414: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:42.415: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:44.413: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:46.414: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:48.413: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:50.412: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:52.416: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:54.414: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:56.415: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:58.417: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:58.422: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:59:08.459: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:59:08.459: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:59:08.465: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:08.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6822" for this suite.


S [SKIPPING] [32.211 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
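
What this spec would exercise, had it run, is moving a service to a new nodePort and confirming traffic follows. A by-hand sketch of the same update; <namespace>, <service>, <node-ip> and the port value 32100 are placeholders:

kubectl -n <namespace> get svc <service> -o jsonpath='{.spec.ports[0].nodePort}'; echo
kubectl -n <namespace> patch svc <service> --type=json \
  -p '[{"op":"replace","path":"/spec/ports/0/nodePort","value":32100}]'
# re-probe the node on the new port, mirroring the nc checks used elsewhere in this run
echo hostName | nc -v -t -w 2 <node-ip> 32100
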
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:46.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Oct 30 03:58:46.141: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:48.145: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:50.146: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:52.146: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Oct 30 03:58:52.162: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:54.166: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:56.166: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:58.166: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Oct 30 03:59:00.184: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:02.187: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:04.187: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:06.189: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:08.190: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:10.187: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Oct 30 03:59:10.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-1341 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Oct 30 03:59:10.504: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
Oct 30 03:59:10.505: INFO: stdout: "tcp      6 3594 CLOSE_WAIT src=10.244.3.197 dst=10.10.190.208 sport=60822 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=48416 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Oct 30 03:59:10.505: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3594 CLOSE_WAIT src=10.244.3.197 dst=10.10.190.208 sport=60822 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=48416 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:10.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-1341" for this suite.


• [SLOW TEST:24.414 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":5,"skipped":636,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:53.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-8954
STEP: creating replication controller externalip-test in namespace services-8954
I1030 03:58:53.165610      29 runners.go:190] Created replication controller with name: externalip-test, namespace: services-8954, replica count: 2
I1030 03:58:56.216688      29 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:58:59.217139      29 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:02.218495      29 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:05.219769      29 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:08.221012      29 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 30 03:59:08.221: INFO: Creating new exec pod
Oct 30 03:59:17.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8954 exec execpod695h6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct 30 03:59:17.666: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Oct 30 03:59:17.666: INFO: stdout: "externalip-test-wm4n6"
Oct 30 03:59:17.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8954 exec execpod695h6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.2.232 80'
Oct 30 03:59:17.987: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.2.232 80\nConnection to 10.233.2.232 80 port [tcp/http] succeeded!\n"
Oct 30 03:59:17.987: INFO: stdout: "externalip-test-wm4n6"
Oct 30 03:59:17.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8954 exec execpod695h6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Oct 30 03:59:18.617: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Oct 30 03:59:18.617: INFO: stdout: "externalip-test-f2vsh"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:18.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8954" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:25.492 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":4,"skipped":252,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:59:18.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 30 03:59:18.738: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Oct 30 03:59:18.742: INFO: starting watch
STEP: patching
STEP: updating
Oct 30 03:59:18.750: INFO: waiting for watch events with expected annotations
Oct 30 03:59:18.750: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Oct 30 03:59:18.750: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:18.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-7917" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":5,"skipped":279,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:59:18.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Oct 30 03:59:18.850: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:18.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-4366" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
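
Skipped for the same provider reason as the other firewall specs. The underlying idea is easy to approximate from outside the cluster: apart from the API server port, well-known control-plane ports should not accept connections on a public address. The port list below is the usual set (etcd, kubelet, scheduler, controller-manager), not taken from this log, and <control-plane-ip> is a placeholder:

for port in 2379 2380 10250 10251 10252 10257 10259; do
  nc -z -w 2 <control-plane-ip> "$port" && echo "port $port unexpectedly open"
done
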
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:59:19.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 30 03:59:19.476: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Oct 30 03:59:19.481: INFO: starting watch
STEP: patching
STEP: updating
Oct 30 03:59:19.493: INFO: waiting for watch events with expected annotations
Oct 30 03:59:19.493: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Oct 30 03:59:19.493: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:19.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-9154" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":6,"skipped":581,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 30 03:59:19.747: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:39.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-5118
STEP: creating a client pod for probing the service svc-udp
Oct 30 03:58:39.616: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:41.621: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:43.620: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:45.620: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:47.621: INFO: The status of Pod pod-client is Running (Ready = true)
Oct 30 03:58:47.640: INFO: Pod client logs: Sat Oct 30 03:58:42 UTC 2021
Sat Oct 30 03:58:42 UTC 2021 Try: 1

Sat Oct 30 03:58:42 UTC 2021 Try: 2

Sat Oct 30 03:58:42 UTC 2021 Try: 3

Sat Oct 30 03:58:42 UTC 2021 Try: 4

Sat Oct 30 03:58:42 UTC 2021 Try: 5

Sat Oct 30 03:58:42 UTC 2021 Try: 6

Sat Oct 30 03:58:42 UTC 2021 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Oct 30 03:58:47.653: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:49.655: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:51.658: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:53.657: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-5118 to expose endpoints map[pod-server-1:[80]]
Oct 30 03:58:53.668: INFO: successfully validated that service svc-udp in namespace conntrack-5118 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Oct 30 03:59:03.716: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:05.722: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:07.720: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:09.719: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:11.721: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:13.719: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Oct 30 03:59:13.723: INFO: Cleaning up pod-server-1 pod
Oct 30 03:59:13.729: INFO: Waiting for pod pod-server-1 to disappear
Oct 30 03:59:13.732: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-5118 to expose endpoints map[pod-server-2:[80]]
Oct 30 03:59:13.739: INFO: successfully validated that service svc-udp in namespace conntrack-5118 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:23.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-5118" for this suite.


• [SLOW TEST:44.195 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":3,"skipped":312,"failed":0}
Oct 30 03:59:23.761: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:59:01.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-3858
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:59:01.928: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:59:01.965: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:03.968: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:05.969: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:07.970: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:09.968: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:11.969: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:13.968: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:15.971: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:17.968: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:19.968: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:21.969: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:59:21.973: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 30 03:59:23.976: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:59:30.013: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:59:30.013: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:59:30.020: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:30.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3858" for this suite.


S [SKIPPING] [28.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Oct 30 03:59:30.032: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:59:08.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-6952
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:59:08.865: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:59:08.905: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:10.908: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:12.909: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:14.909: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:16.908: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:18.909: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:20.908: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:22.908: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:24.909: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:26.909: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:28.908: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:59:30.911: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:59:30.916: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:59:34.981: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:59:34.981: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:59:34.988: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:34.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6952" for this suite.


S [SKIPPING] [26.243 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
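
Like the other Granular Checks specs, this one needs two usable nodes; the check itself is just reaching the service's UDP port through a node address. A by-hand equivalent once the cluster has enough nodes; <namespace> and <service> are placeholders:

NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl -n <namespace> get svc <service> -o jsonpath='{.spec.ports[0].nodePort}')
echo hostname | nc -u -w 2 "$NODE_IP" "$NODE_PORT"
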
Oct 30 03:59:34.999: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:20.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1030 03:57:20.832830      28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 03:57:20.833: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 03:57:20.834: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-1320
Oct 30 03:57:20.841: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-1320
I1030 03:57:20.852967      28 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-1320, replica count: 2
I1030 03:57:23.904350      28 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:57:26.904685      28 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:57:29.905523      28 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 30 03:57:29.905: INFO: Creating new exec pod
Oct 30 03:57:34.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Oct 30 03:57:35.180: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Oct 30 03:57:35.180: INFO: stdout: "nodeport-update-service-cbc97"
Oct 30 03:57:35.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.33.194 80'
Oct 30 03:57:35.497: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.33.194 80\nConnection to 10.233.33.194 80 port [tcp/http] succeeded!\n"
Oct 30 03:57:35.497: INFO: stdout: "nodeport-update-service-nw6v5"
Oct 30 03:57:35.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:35.734: INFO: rc: 1
Oct 30 03:57:35.735: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:36.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:36.996: INFO: rc: 1
Oct 30 03:57:36.996: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:37.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:38.057: INFO: rc: 1
Oct 30 03:57:38.057: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:38.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:38.984: INFO: rc: 1
Oct 30 03:57:38.984: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:39.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:39.994: INFO: rc: 1
Oct 30 03:57:39.994: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:40.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:40.977: INFO: rc: 1
Oct 30 03:57:40.977: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:41.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:42.347: INFO: rc: 1
Oct 30 03:57:42.347: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:42.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:43.079: INFO: rc: 1
Oct 30 03:57:43.079: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:43.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:44.553: INFO: rc: 1
Oct 30 03:57:44.553: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:44.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:45.043: INFO: rc: 1
Oct 30 03:57:45.043: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:45.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:45.978: INFO: rc: 1
Oct 30 03:57:45.978: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:46.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:46.966: INFO: rc: 1
Oct 30 03:57:46.966: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:47.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:47.972: INFO: rc: 1
Oct 30 03:57:47.972: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:48.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:49.000: INFO: rc: 1
Oct 30 03:57:49.000: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:49.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:49.984: INFO: rc: 1
Oct 30 03:57:49.984: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:50.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:51.164: INFO: rc: 1
Oct 30 03:57:51.165: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:51.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:51.979: INFO: rc: 1
Oct 30 03:57:51.979: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:52.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:53.409: INFO: rc: 1
Oct 30 03:57:53.410: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:53.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:54.118: INFO: rc: 1
Oct 30 03:57:54.119: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:54.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:55.129: INFO: rc: 1
Oct 30 03:57:55.129: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:55.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:56.051: INFO: rc: 1
Oct 30 03:57:56.051: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:56.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:57.285: INFO: rc: 1
Oct 30 03:57:57.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 31686
+ echo hostName
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:57:57.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:57:58.059: INFO: rc: 1
Oct 30 03:57:58.059: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... identical retry attempts continued roughly once per second from 03:57:58 through 03:59:31; every attempt failed with "nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused" (exit status 1) and was followed by "Retrying..." ...]
Oct 30 03:59:32.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:59:32.976: INFO: rc: 1
Oct 30 03:59:32.976: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:59:33.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:59:33.971: INFO: rc: 1
Oct 30 03:59:33.971: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:59:34.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:59:34.987: INFO: rc: 1
Oct 30 03:59:34.987: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:59:35.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:59:36.002: INFO: rc: 1
Oct 30 03:59:36.002: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:59:36.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686'
Oct 30 03:59:36.262: INFO: rc: 1
Oct 30 03:59:36.262: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1320 exec execpodfr5ct -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31686:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31686
nc: connect to 10.10.190.207 port 31686 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Oct 30 03:59:36.263: FAIL: Unexpected error:
    <*errors.errorString | 0xc004468710>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31686 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31686 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001991680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001991680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001991680, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
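The failure above is produced by the suite's TCP reachability check for the updated NodePort service: it re-runs the same kubectl exec / nc probe roughly once per second and gives up after the 2m0s deadline quoted in the error message. Below is a minimal stand-alone Go sketch of that probe loop, not the framework's own helper from service.go; the kubeconfig path, namespace, exec pod name, node IP and NodePort are copied from the log above, and everything else is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		kubeconfig = "/root/.kube/config" // from the log above
		namespace  = "services-1320"      // test namespace from the log
		execPod    = "execpodfr5ct"       // helper pod the suite execs into
		nodeIP     = "10.10.190.207"      // node IP under test
		nodePort   = "31686"              // NodePort under test
		timeout    = 2 * time.Minute      // matches the suite's 2m0s deadline
		interval   = time.Second          // approximate retry cadence seen above
	)

	// Same shell command the suite runs inside the exec pod.
	probe := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %s", nodeIP, nodePort)
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl",
			"--kubeconfig="+kubeconfig, "--namespace="+namespace,
			"exec", execPod, "--", "/bin/sh", "-x", "-c", probe).CombinedOutput()
		if err == nil {
			// nc exited 0: something answered on nodeIP:nodePort.
			fmt.Printf("service reachable:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("service is not reachable within %v timeout on endpoint %s:%s over TCP protocol\n",
				timeout, nodeIP, nodePort)
			return
		}
		fmt.Println("Retrying...")
		time.Sleep(interval)
	}
}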
Oct 30 03:59:36.264: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-1320".
STEP: Found 17 events.
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:20 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-nw6v5
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:20 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-cbc97
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:20 +0000 UTC - event for nodeport-update-service-cbc97: {default-scheduler } Scheduled: Successfully assigned services-1320/nodeport-update-service-cbc97 to node1
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:20 +0000 UTC - event for nodeport-update-service-nw6v5: {default-scheduler } Scheduled: Successfully assigned services-1320/nodeport-update-service-nw6v5 to node2
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:27 +0000 UTC - event for nodeport-update-service-cbc97: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:27 +0000 UTC - event for nodeport-update-service-nw6v5: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 407.000942ms
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:27 +0000 UTC - event for nodeport-update-service-nw6v5: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:28 +0000 UTC - event for nodeport-update-service-cbc97: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.443445385s
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:28 +0000 UTC - event for nodeport-update-service-cbc97: {kubelet node1} Created: Created container nodeport-update-service
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:28 +0000 UTC - event for nodeport-update-service-nw6v5: {kubelet node2} Created: Created container nodeport-update-service
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:28 +0000 UTC - event for nodeport-update-service-nw6v5: {kubelet node2} Started: Started container nodeport-update-service
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:29 +0000 UTC - event for execpodfr5ct: {default-scheduler } Scheduled: Successfully assigned services-1320/execpodfr5ct to node1
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:29 +0000 UTC - event for nodeport-update-service-cbc97: {kubelet node1} Started: Started container nodeport-update-service
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:31 +0000 UTC - event for execpodfr5ct: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:32 +0000 UTC - event for execpodfr5ct: {kubelet node1} Created: Created container agnhost-container
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:32 +0000 UTC - event for execpodfr5ct: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 332.876727ms
Oct 30 03:59:36.290: INFO: At 2021-10-30 03:57:32 +0000 UTC - event for execpodfr5ct: {kubelet node1} Started: Started container agnhost-container
Oct 30 03:59:36.294: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Oct 30 03:59:36.294: INFO: execpodfr5ct                   node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:29 +0000 UTC  }]
Oct 30 03:59:36.294: INFO: nodeport-update-service-cbc97  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:20 +0000 UTC  }]
Oct 30 03:59:36.294: INFO: nodeport-update-service-nw6v5  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:57:20 +0000 UTC  }]
Oct 30 03:59:36.294: INFO: 
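The event list and pod-condition dump above are the diagnostics the framework collects from the test namespace once the spec fails. A rough equivalent can be pulled directly with client-go, as in the sketch below; the kubeconfig path and namespace are taken from this log, the rest is illustrative and assumes a client-go release whose List calls take a context.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: same kubeconfig and namespace as in the log above.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ns := "services-1320"

	// Events, comparable to the "Found 17 events." section above.
	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s/%s %s: %s\n",
			e.LastTimestamp, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}

	// Pod phase and conditions, comparable to the POD/NODE/PHASE table above.
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s on %s is %s\n", p.Name, p.Spec.NodeName, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			fmt.Printf("  %s=%s (since %s)\n", c.Type, c.Status, c.LastTransitionTime)
		}
	}
}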
Oct 30 03:59:36.299: INFO: 
Logging node info for node master1
Oct 30 03:59:36.302: INFO: Node Info: &Node{ObjectMeta:{master1    b47c04d5-47a7-4a95-8e97-481e6e60af54 150922 0 2021-10-29 21:05:34 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:31 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:31 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:31 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:31 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:59:36.303: INFO: 
Logging kubelet events for node master1
Oct 30 03:59:36.305: INFO: 
Logging pods the kubelet thinks are on node master1
Oct 30 03:59:36.337: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.337: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:59:36.337: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.337: INFO: 	Container kube-controller-manager ready: true, restart count 2
Oct 30 03:59:36.337: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:36.337: INFO: 	Init container install-cni ready: true, restart count 0
Oct 30 03:59:36.337: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:59:36.337: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.337: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:36.337: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.337: INFO: 	Container kube-scheduler ready: true, restart count 0
Oct 30 03:59:36.337: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.337: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:59:36.337: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.337: INFO: 	Container coredns ready: true, restart count 1
Oct 30 03:59:36.337: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:36.337: INFO: 	Container docker-registry ready: true, restart count 0
Oct 30 03:59:36.337: INFO: 	Container nginx ready: true, restart count 0
Oct 30 03:59:36.337: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:36.337: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:36.337: INFO: 	Container node-exporter ready: true, restart count 0
W1030 03:59:36.350164      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:36.426: INFO: 
Latency metrics for node master1
Oct 30 03:59:36.426: INFO: 
Logging node info for node master2
Oct 30 03:59:36.429: INFO: Node Info: &Node{ObjectMeta:{master2    208792d3-d365-4ddb-83d4-10e6e818079c 150851 0 2021-10-29 21:06:06 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:59:36.430: INFO: 
Logging kubelet events for node master2
Oct 30 03:59:36.432: INFO: 
Logging pods the kubelet thinks are on node master2
Oct 30 03:59:36.456: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.456: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:59:36.456: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.456: INFO: 	Container kube-controller-manager ready: true, restart count 3
Oct 30 03:59:36.456: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.456: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 03:59:36.456: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.456: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 30 03:59:36.456: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:36.456: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:59:36.456: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 30 03:59:36.456: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.456: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:36.456: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:36.456: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:36.456: INFO: 	Container node-exporter ready: true, restart count 0
W1030 03:59:36.477679      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:36.541: INFO: 
Latency metrics for node master2
Oct 30 03:59:36.541: INFO: 
Logging node info for node master3
Oct 30 03:59:36.543: INFO: Node Info: &Node{ObjectMeta:{master3    168f1589-e029-47ae-b194-10215fc22d6a 150845 0 2021-10-29 21:06:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:27 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:59:36.544: INFO: 
Logging kubelet events for node master3
Oct 30 03:59:36.546: INFO: 
Logging pods the kubelet thinks are on node master3
Oct 30 03:59:36.563: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:36.563: INFO: 	Container prometheus-operator ready: true, restart count 0
Oct 30 03:59:36.563: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:36.563: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:59:36.563: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container kube-controller-manager ready: true, restart count 1
Oct 30 03:59:36.563: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:59:36.563: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:59:36.563: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:59:36.563: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:36.563: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container coredns ready: true, restart count 1
Oct 30 03:59:36.563: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:59:36.563: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 03:59:36.563: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container autoscaler ready: true, restart count 1
Oct 30 03:59:36.563: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.563: INFO: 	Container nfd-controller ready: true, restart count 0
W1030 03:59:36.577039      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:36.662: INFO: 
Latency metrics for node master3
Oct 30 03:59:36.662: INFO: 
Logging node info for node node1
Oct 30 03:59:36.665: INFO: Node Info: &Node{ObjectMeta:{node1    ddef9269-94c5-4165-81fb-a3b0c4ac5c75 150991 0 2021-10-29 21:07:27 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 03:08:22 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:35 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:35 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:35 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:35 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:59:36.665: INFO: 
Logging kubelet events for node node1
Oct 30 03:59:36.668: INFO: 
Logging pods the kubelet thinks are on node node1
Oct 30 03:59:36.720: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:59:36.720: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:59:36.720: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:59:36.720: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:59:36.720: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:59:36.720: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 03:59:36.720: INFO: pod-client started at 2021-10-30 03:58:30 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container pod-client ready: true, restart count 0
Oct 30 03:59:36.720: INFO: e2e-net-exec started at 2021-10-30 03:58:46 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container e2e-net-exec ready: true, restart count 0
Oct 30 03:59:36.720: INFO: netserver-0 started at 2021-10-30 03:59:08 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:59:36.720: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:59:36.720: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:59:36.720: INFO: 	Container init ready: false, restart count 0
Oct 30 03:59:36.720: INFO: 	Container install ready: false, restart count 0
Oct 30 03:59:36.720: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:59:36.720: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 03:59:36.720: INFO: host-test-container-pod started at 2021-10-30 03:59:30 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:59:36.720: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:36.720: INFO: iperf2-clients-cgvhn started at 2021-10-30 03:59:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container iperf2-client ready: true, restart count 0
Oct 30 03:59:36.720: INFO: service-proxy-disabled-wm9hg started at 2021-10-30 03:59:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 03:59:36.720: INFO: verify-service-up-host-exec-pod started at 2021-10-30 03:59:29 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:59:36.720: INFO: iperf2-server-deployment-59979d877-hbmbm started at 2021-10-30 03:58:49 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container iperf2-server ready: true, restart count 0
Oct 30 03:59:36.720: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:59:36.720: INFO: nodeport-update-service-cbc97 started at 2021-10-30 03:57:20 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container nodeport-update-service ready: true, restart count 0
Oct 30 03:59:36.720: INFO: netserver-0 started at 2021-10-30 03:58:27 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:59:36.720: INFO: up-down-2-j6pdl started at 2021-10-30 03:58:01 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:59:36.720: INFO: execpodfr5ct started at 2021-10-30 03:57:29 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:59:36.720: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 30 03:59:36.720: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:36.720: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:36.720: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:59:36.720: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded)
Oct 30 03:59:36.721: INFO: 	Container config-reloader ready: true, restart count 0
Oct 30 03:59:36.721: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Oct 30 03:59:36.721: INFO: 	Container grafana ready: true, restart count 0
Oct 30 03:59:36.721: INFO: 	Container prometheus ready: true, restart count 1
Oct 30 03:59:36.721: INFO: verify-service-up-host-exec-pod started at 2021-10-30 03:59:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.721: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:59:36.721: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.721: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 03:59:36.721: INFO: test-container-pod started at 2021-10-30 03:58:49 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.721: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:59:36.721: INFO: service-proxy-disabled-zrs6q started at 2021-10-30 03:59:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:36.721: INFO: 	Container service-proxy-disabled ready: true, restart count 0
W1030 03:59:36.734362      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:37.318: INFO: 
Latency metrics for node node1
Oct 30 03:59:37.318: INFO: 
Logging node info for node node2
Oct 30 03:59:37.322: INFO: Node Info: &Node{ObjectMeta:{node2    3b49ad19-ba56-4f4a-b1fa-eef102063de9 150934 0 2021-10-29 21:07:28 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-10-30 01:59:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-10-30 01:59:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:32 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:32 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:32 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:32 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:59:37.323: INFO: 
Logging kubelet events for node node2
Oct 30 03:59:37.327: INFO: 
Logging pods the kubelet thinks are on node node2
Oct 30 03:59:37.352: INFO: netserver-1 started at 2021-10-30 03:58:27 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.352: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:59:37.352: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:37.352: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:59:37.352: INFO: 	Container kube-flannel ready: true, restart count 3
Oct 30 03:59:37.352: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.352: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:37.352: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.352: INFO: 	Container cmk-webhook ready: true, restart count 0
Oct 30 03:59:37.352: INFO: service-proxy-toggled-h792q started at 2021-10-30 03:59:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.352: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 03:59:37.352: INFO: up-down-3-qk7dl started at 2021-10-30 03:59:17 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.352: INFO: 	Container up-down-3 ready: true, restart count 0
Oct 30 03:59:37.352: INFO: service-proxy-toggled-lwc7z started at 2021-10-30 03:59:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.352: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 03:59:37.352: INFO: pod-server-1 started at 2021-10-30 03:58:37 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.352: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:59:37.352: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.352: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:59:37.353: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Oct 30 03:59:37.353: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:59:37.353: INFO: 	Container init ready: false, restart count 0
Oct 30 03:59:37.353: INFO: 	Container install ready: false, restart count 0
Oct 30 03:59:37.353: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:59:37.353: INFO: verify-service-up-exec-pod-54jjr started at 2021-10-30 03:59:32 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:59:37.353: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:59:37.353: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 03:59:37.353: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:37.353: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:59:37.353: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container tas-extender ready: true, restart count 0
Oct 30 03:59:37.353: INFO: service-proxy-disabled-rnj55 started at 2021-10-30 03:59:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 03:59:37.353: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:59:37.353: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:59:37.353: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 03:59:37.353: INFO: netserver-1 started at 2021-10-30 03:59:08 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:59:37.353: INFO: up-down-3-zclc7 started at 2021-10-30 03:59:17 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container up-down-3 ready: true, restart count 0
Oct 30 03:59:37.353: INFO: up-down-3-5qbm2 started at 2021-10-30 03:59:17 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container up-down-3 ready: true, restart count 0
Oct 30 03:59:37.353: INFO: test-container-pod started at 2021-10-30 03:59:30 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:59:37.353: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:59:37.353: INFO: service-proxy-toggled-r5j6m started at 2021-10-30 03:59:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 03:59:37.353: INFO: iperf2-clients-fnnqz started at 2021-10-30 03:59:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container iperf2-client ready: true, restart count 0
Oct 30 03:59:37.353: INFO: up-down-2-6422j started at 2021-10-30 03:58:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:59:37.353: INFO: up-down-2-j2w7w started at 2021-10-30 03:58:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:59:37.353: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 03:59:37.353: INFO: verify-service-up-exec-pod-lwwdn started at 2021-10-30 03:59:33 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container agnhost-container ready: false, restart count 0
Oct 30 03:59:37.353: INFO: nodeport-update-service-nw6v5 started at 2021-10-30 03:57:20 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:37.353: INFO: 	Container nodeport-update-service ready: true, restart count 0
W1030 03:59:37.366680      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:38.022: INFO: 
Latency metrics for node node2
Oct 30 03:59:38.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1320" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [137.221 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Oct 30 03:59:36.263: Unexpected error:
      <*errors.errorString | 0xc004468710>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31686 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31686 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":0,"skipped":221,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Oct 30 03:59:38.041: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:30.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-7101
STEP: creating a client pod for probing the service svc-udp
Oct 30 03:58:30.523: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:32.526: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:34.526: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:36.528: INFO: The status of Pod pod-client is Running (Ready = true)
Oct 30 03:58:37.274: INFO: Pod client logs: Sat Oct 30 03:58:35 UTC 2021
Sat Oct 30 03:58:35 UTC 2021 Try: 1

Sat Oct 30 03:58:35 UTC 2021 Try: 2

Sat Oct 30 03:58:35 UTC 2021 Try: 3

Sat Oct 30 03:58:35 UTC 2021 Try: 4

Sat Oct 30 03:58:35 UTC 2021 Try: 5

Sat Oct 30 03:58:35 UTC 2021 Try: 6

Sat Oct 30 03:58:35 UTC 2021 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Oct 30 03:58:37.286: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:39.290: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:41.290: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-7101 to expose endpoints map[pod-server-1:[80]]
Oct 30 03:58:41.299: INFO: successfully validated that service svc-udp in namespace conntrack-7101 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Oct 30 03:59:41.328: INFO: Pod client logs: Sat Oct 30 03:58:35 UTC 2021
Sat Oct 30 03:58:35 UTC 2021 Try: 1

Sat Oct 30 03:58:35 UTC 2021 Try: 2

Sat Oct 30 03:58:35 UTC 2021 Try: 3

Sat Oct 30 03:58:35 UTC 2021 Try: 4

Sat Oct 30 03:58:35 UTC 2021 Try: 5

Sat Oct 30 03:58:35 UTC 2021 Try: 6

Sat Oct 30 03:58:35 UTC 2021 Try: 7

Sat Oct 30 03:58:40 UTC 2021 Try: 8

Sat Oct 30 03:58:40 UTC 2021 Try: 9

Sat Oct 30 03:58:40 UTC 2021 Try: 10

Sat Oct 30 03:58:40 UTC 2021 Try: 11

Sat Oct 30 03:58:40 UTC 2021 Try: 12

Sat Oct 30 03:58:40 UTC 2021 Try: 13

Sat Oct 30 03:58:45 UTC 2021 Try: 14

Sat Oct 30 03:58:45 UTC 2021 Try: 15

Sat Oct 30 03:58:45 UTC 2021 Try: 16

Sat Oct 30 03:58:45 UTC 2021 Try: 17

Sat Oct 30 03:58:45 UTC 2021 Try: 18

Sat Oct 30 03:58:45 UTC 2021 Try: 19

Sat Oct 30 03:58:50 UTC 2021 Try: 20

Sat Oct 30 03:58:50 UTC 2021 Try: 21

Sat Oct 30 03:58:50 UTC 2021 Try: 22

Sat Oct 30 03:58:50 UTC 2021 Try: 23

Sat Oct 30 03:58:50 UTC 2021 Try: 24

Sat Oct 30 03:58:50 UTC 2021 Try: 25

Sat Oct 30 03:58:55 UTC 2021 Try: 26

Sat Oct 30 03:58:55 UTC 2021 Try: 27

Sat Oct 30 03:58:55 UTC 2021 Try: 28

Sat Oct 30 03:58:55 UTC 2021 Try: 29

Sat Oct 30 03:58:55 UTC 2021 Try: 30

Sat Oct 30 03:58:55 UTC 2021 Try: 31

Sat Oct 30 03:59:00 UTC 2021 Try: 32

Sat Oct 30 03:59:00 UTC 2021 Try: 33

Sat Oct 30 03:59:00 UTC 2021 Try: 34

Sat Oct 30 03:59:00 UTC 2021 Try: 35

Sat Oct 30 03:59:00 UTC 2021 Try: 36

Sat Oct 30 03:59:00 UTC 2021 Try: 37

Sat Oct 30 03:59:05 UTC 2021 Try: 38

Sat Oct 30 03:59:05 UTC 2021 Try: 39

Sat Oct 30 03:59:05 UTC 2021 Try: 40

Sat Oct 30 03:59:05 UTC 2021 Try: 41

Sat Oct 30 03:59:05 UTC 2021 Try: 42

Sat Oct 30 03:59:05 UTC 2021 Try: 43

Sat Oct 30 03:59:10 UTC 2021 Try: 44

Sat Oct 30 03:59:10 UTC 2021 Try: 45

Sat Oct 30 03:59:10 UTC 2021 Try: 46

Sat Oct 30 03:59:10 UTC 2021 Try: 47

Sat Oct 30 03:59:10 UTC 2021 Try: 48

Sat Oct 30 03:59:10 UTC 2021 Try: 49

Sat Oct 30 03:59:15 UTC 2021 Try: 50

Sat Oct 30 03:59:15 UTC 2021 Try: 51

Sat Oct 30 03:59:15 UTC 2021 Try: 52

Sat Oct 30 03:59:15 UTC 2021 Try: 53

Sat Oct 30 03:59:15 UTC 2021 Try: 54

Sat Oct 30 03:59:15 UTC 2021 Try: 55

Sat Oct 30 03:59:20 UTC 2021 Try: 56

Sat Oct 30 03:59:20 UTC 2021 Try: 57

Sat Oct 30 03:59:20 UTC 2021 Try: 58

Sat Oct 30 03:59:20 UTC 2021 Try: 59

Sat Oct 30 03:59:20 UTC 2021 Try: 60

Sat Oct 30 03:59:20 UTC 2021 Try: 61

Sat Oct 30 03:59:25 UTC 2021 Try: 62

Sat Oct 30 03:59:25 UTC 2021 Try: 63

Sat Oct 30 03:59:25 UTC 2021 Try: 64

Sat Oct 30 03:59:25 UTC 2021 Try: 65

Sat Oct 30 03:59:25 UTC 2021 Try: 66

Sat Oct 30 03:59:25 UTC 2021 Try: 67

Sat Oct 30 03:59:30 UTC 2021 Try: 68

Sat Oct 30 03:59:30 UTC 2021 Try: 69

Sat Oct 30 03:59:30 UTC 2021 Try: 70

Sat Oct 30 03:59:30 UTC 2021 Try: 71

Sat Oct 30 03:59:30 UTC 2021 Try: 72

Sat Oct 30 03:59:30 UTC 2021 Try: 73

Sat Oct 30 03:59:35 UTC 2021 Try: 74

Sat Oct 30 03:59:35 UTC 2021 Try: 75

Sat Oct 30 03:59:35 UTC 2021 Try: 76

Sat Oct 30 03:59:35 UTC 2021 Try: 77

Sat Oct 30 03:59:35 UTC 2021 Try: 78

Sat Oct 30 03:59:35 UTC 2021 Try: 79

Sat Oct 30 03:59:40 UTC 2021 Try: 80

Sat Oct 30 03:59:40 UTC 2021 Try: 81

Sat Oct 30 03:59:40 UTC 2021 Try: 82

Sat Oct 30 03:59:40 UTC 2021 Try: 83

Sat Oct 30 03:59:40 UTC 2021 Try: 84

Sat Oct 30 03:59:40 UTC 2021 Try: 85

Oct 30 03:59:41.329: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001c00780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001c00780)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001c00780, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
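Editor's note: the "Try: N" lines above are the client pod repeatedly probing the UDP NodePort and logging each attempt; the spec fails because no reply from the backend ever reaches the client. As a hedged illustration only (the real client is an agnhost container driven by test/e2e/network/conntrack.go, not this code), a minimal Go sketch of such a UDP probe loop is shown below; the target address, payload, and retry cadence are assumptions.

// udp_probe.go: minimal sketch of a UDP NodePort probe loop in the spirit
// of the pod-client "Try: N" log lines above. NOT the agnhost client used
// by the conntrack e2e test; it only illustrates the retry pattern.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Assumed target: Node IP and UDP NodePort of svc-udp (placeholder values).
	target := "10.10.190.208:30000"

	for attempt := 1; attempt <= 100; attempt++ {
		fmt.Printf("%s Try: %d\n", time.Now().UTC().Format(time.UnixDate), attempt)

		conn, err := net.DialTimeout("udp", target, 2*time.Second)
		if err != nil {
			time.Sleep(time.Second)
			continue
		}

		// Send a datagram and wait briefly for an echo from the backend pod.
		conn.SetDeadline(time.Now().Add(2 * time.Second))
		fmt.Fprintf(conn, "hostname\n")
		buf := make([]byte, 1024)
		n, err := conn.Read(buf)
		conn.Close()
		if err == nil && n > 0 {
			fmt.Printf("connected to backend: %s\n", string(buf[:n]))
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("FAIL: no reply from backend over UDP")
}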
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-7101".
STEP: Found 8 events.
Oct 30 03:59:41.333: INFO: At 2021-10-30 03:58:34 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:59:41.333: INFO: At 2021-10-30 03:58:35 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 364.784423ms
Oct 30 03:59:41.333: INFO: At 2021-10-30 03:58:35 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Oct 30 03:59:41.333: INFO: At 2021-10-30 03:58:35 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Oct 30 03:59:41.333: INFO: At 2021-10-30 03:58:39 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 03:59:41.333: INFO: At 2021-10-30 03:58:39 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 269.712999ms
Oct 30 03:59:41.333: INFO: At 2021-10-30 03:58:39 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Oct 30 03:59:41.333: INFO: At 2021-10-30 03:58:40 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Oct 30 03:59:41.336: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Oct 30 03:59:41.336: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:30 +0000 UTC  }]
Oct 30 03:59:41.336: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:37 +0000 UTC  }]
Oct 30 03:59:41.336: INFO: 
Oct 30 03:59:41.340: INFO: 
Logging node info for node master1
Oct 30 03:59:41.343: INFO: Node Info: &Node{ObjectMeta:{master1    b47c04d5-47a7-4a95-8e97-481e6e60af54 151083 0 2021-10-29 21:05:34 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:41 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:41 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:41 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:41 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:59:41.344: INFO: 
Logging kubelet events for node master1
Oct 30 03:59:41.346: INFO: 
Logging pods the kubelet thinks are on node master1
Oct 30 03:59:41.367: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.367: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:41.367: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.367: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:59:41.367: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.367: INFO: 	Container kube-controller-manager ready: true, restart count 2
Oct 30 03:59:41.367: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:41.367: INFO: 	Init container install-cni ready: true, restart count 0
Oct 30 03:59:41.367: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:59:41.367: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:41.367: INFO: 	Container docker-registry ready: true, restart count 0
Oct 30 03:59:41.367: INFO: 	Container nginx ready: true, restart count 0
Oct 30 03:59:41.367: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:41.367: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:41.367: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:59:41.367: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.367: INFO: 	Container kube-scheduler ready: true, restart count 0
Oct 30 03:59:41.367: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.367: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:59:41.367: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.367: INFO: 	Container coredns ready: true, restart count 1
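Editor's note: the "Logging pods the kubelet thinks are on node master1" block above lists every pod whose spec.nodeName is master1, with per-container readiness and restart counts. A hedged sketch of an equivalent query follows; spec.nodeName is the real server-side field selector, while the helper itself is illustrative rather than the framework's implementation. The kubectl equivalent is kubectl get pods -A --field-selector spec.nodeName=master1.

// podsbynode.go - sketch of the query behind the per-node pod listing above:
// list pods across all namespaces scheduled to a node and report container
// readiness and restart counts.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func logPodsOnNode(cs kubernetes.Interface, nodeName string) error {
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		// Server-side filter on the field the scheduler sets when binding a pod.
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s started at %v\n", p.Namespace, p.Name, p.Status.StartTime)
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("\tContainer %s ready: %t, restart count %d\n", st.Name, st.Ready, st.RestartCount)
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := logPodsOnNode(cs, "master1"); err != nil {
		panic(err)
	}
}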
W1030 03:59:41.381767      34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:41.458: INFO: 
Latency metrics for node master1
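Editor's note: the "Latency metrics for node master1" line is emitted by the framework's metrics grabber (the metric payload itself is not reproduced in this excerpt). One generic way to pull a node's kubelet metrics without the framework is the nodes/<name>/proxy/metrics subresource on the API server; the sketch below assumes that route and appropriate RBAC on nodes/proxy, and makes no claim about how the grabber is implemented internally.

// kubeletmetrics.go - hedged sketch: fetch the kubelet's Prometheus-format
// /metrics endpoint for a node by proxying through the API server
// (GET /api/v1/nodes/master1/proxy/metrics).
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Raw GET against the node proxy subresource; the response body is the
	// kubelet's metrics text exposition.
	raw, err := cs.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name("master1").
		SubResource("proxy").
		Suffix("metrics").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}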
Oct 30 03:59:41.458: INFO: 
Logging node info for node master2
Oct 30 03:59:41.462: INFO: Node Info: &Node{ObjectMeta:{master2    208792d3-d365-4ddb-83d4-10e6e818079c 151011 0 2021-10-29 21:06:06 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:37 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:37 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:37 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:37 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:59:41.463: INFO: 
Logging kubelet events for node master2
Oct 30 03:59:41.464: INFO: 
Logging pods the kubelet thinks are on node master2
Oct 30 03:59:41.473: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.473: INFO: 	Container kube-controller-manager ready: true, restart count 3
Oct 30 03:59:41.473: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.473: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 03:59:41.473: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.473: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 30 03:59:41.473: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:41.473: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:59:41.473: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 30 03:59:41.473: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.473: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:41.473: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:41.473: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:41.473: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:59:41.473: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.473: INFO: 	Container kube-apiserver ready: true, restart count 0
W1030 03:59:41.486204      34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:41.552: INFO: 
Latency metrics for node master2
Oct 30 03:59:41.552: INFO: 
Logging node info for node master3
Oct 30 03:59:41.554: INFO: Node Info: &Node{ObjectMeta:{master3    168f1589-e029-47ae-b194-10215fc22d6a 151005 0 2021-10-29 21:06:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:37 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:37 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:37 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:37 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 03:59:41.554: INFO: 
Logging kubelet events for node master3
Oct 30 03:59:41.556: INFO: 
Logging pods the kubelet thinks are on node master3
Oct 30 03:59:41.565: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container nfd-controller ready: true, restart count 0
Oct 30 03:59:41.565: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 03:59:41.565: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 03:59:41.565: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container autoscaler ready: true, restart count 1
Oct 30 03:59:41.565: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:41.565: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container coredns ready: true, restart count 1
Oct 30 03:59:41.565: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:41.565: INFO: 	Container prometheus-operator ready: true, restart count 0
Oct 30 03:59:41.565: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:41.565: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:59:41.565: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container kube-controller-manager ready: true, restart count 1
Oct 30 03:59:41.565: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:59:41.565: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:41.565: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:59:41.565: INFO: 	Container kube-flannel ready: true, restart count 2
W1030 03:59:41.579243      34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:41.659: INFO: 
Latency metrics for node master3
Oct 30 03:59:41.659: INFO: 
Logging node info for node node1
Oct 30 03:59:41.663: INFO: Node Info: &Node{ObjectMeta:{node1    ddef9269-94c5-4165-81fb-a3b0c4ac5c75 150991 0 2021-10-29 21:07:27 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 03:08:22 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:35 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:35 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:35 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:35 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
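Editor's note: node1's dump is the first in this excerpt to show extended resources in Capacity/Allocatable (cmk.intel.com/exclusive-cores, intel.com/intel_sriov_netdevice, hugepages-2Mi, and the scheduling.k8s.io/foo resource patched in by e2e.test). The sketch below compares the two lists; the resource names come from this dump, while the program itself is illustrative.

// allocatable.go - sketch: print each resource's Capacity next to its
// Allocatable value for node1, including extended resources.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is what the node reports; Allocatable subtracts system and
	// kube reservations (and, for extended resources, device-plugin limits).
	for name, capQty := range node.Status.Capacity {
		alloc := node.Status.Allocatable[name]
		fmt.Printf("%-40s capacity=%-15s allocatable=%s\n", name, capQty.String(), alloc.String())
	}
}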
Oct 30 03:59:41.663: INFO: 
Logging kubelet events for node node1
Oct 30 03:59:41.666: INFO: 
Logging pods the kubelet thinks are on node node1
Oct 30 03:59:41.682: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 30 03:59:41.682: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:41.682: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:59:41.682: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container config-reloader ready: true, restart count 0
Oct 30 03:59:41.682: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Oct 30 03:59:41.682: INFO: 	Container grafana ready: true, restart count 0
Oct 30 03:59:41.682: INFO: 	Container prometheus ready: true, restart count 1
Oct 30 03:59:41.682: INFO: execpodfr5ct started at 2021-10-30 03:57:29 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:59:41.682: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 03:59:41.682: INFO: test-container-pod started at 2021-10-30 03:58:49 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:59:41.682: INFO: service-proxy-disabled-zrs6q started at 2021-10-30 03:59:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 03:59:41.682: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:59:41.682: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 03:59:41.682: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:59:41.682: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:59:41.682: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:59:41.682: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 03:59:41.682: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:59:41.682: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:59:41.682: INFO: 	Container init ready: false, restart count 0
Oct 30 03:59:41.682: INFO: 	Container install ready: false, restart count 0
Oct 30 03:59:41.682: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:41.682: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:59:41.682: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 03:59:41.683: INFO: pod-client started at 2021-10-30 03:58:30 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container pod-client ready: true, restart count 0
Oct 30 03:59:41.683: INFO: e2e-net-exec started at 2021-10-30 03:58:46 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container e2e-net-exec ready: true, restart count 0
Oct 30 03:59:41.683: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:41.683: INFO: iperf2-clients-cgvhn started at 2021-10-30 03:59:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container iperf2-client ready: true, restart count 0
Oct 30 03:59:41.683: INFO: service-proxy-disabled-wm9hg started at 2021-10-30 03:59:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 03:59:41.683: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:59:41.683: INFO: nodeport-update-service-cbc97 started at 2021-10-30 03:57:20 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container nodeport-update-service ready: true, restart count 0
Oct 30 03:59:41.683: INFO: netserver-0 started at 2021-10-30 03:58:27 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:59:41.683: INFO: iperf2-server-deployment-59979d877-hbmbm started at 2021-10-30 03:58:49 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container iperf2-server ready: true, restart count 0
Oct 30 03:59:41.683: INFO: verify-service-up-host-exec-pod started at  (0+0 container statuses recorded)
Oct 30 03:59:41.683: INFO: up-down-2-j6pdl started at 2021-10-30 03:58:01 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:41.683: INFO: 	Container up-down-2 ready: true, restart count 0
W1030 03:59:41.699671      34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:44.147: INFO: 
Latency metrics for node node1
Oct 30 03:59:44.147: INFO: 
Logging node info for node node2
Oct 30 03:59:44.151: INFO: Node Info: &Node{ObjectMeta:{node2    3b49ad19-ba56-4f4a-b1fa-eef102063de9 151089 0 2021-10-29 21:07:28 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-10-30 01:59:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-10-30 01:59:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:42 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:42 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 03:59:42 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 03:59:42 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
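The node dump above ends with node2's Capacity and Allocatable resource lists (cpu, memory, hugepages, the cmk.intel.com and intel.com/intel_sriov_netdevice extended resources, and the test-injected scheduling.k8s.io/foo). A minimal client-go sketch for pulling the same Allocatable list outside the suite; it reuses the kubeconfig path and node name logged in this run, and is only an illustration, not the framework's own dump code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs; adjust for other environments.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints entries such as cpu = 77, memory = 174692020Ki,
	// intel.com/intel_sriov_netdevice = 4, matching the dump above.
	for name, qty := range node.Status.Allocatable {
		fmt.Printf("%s = %s\n", name, qty.String())
	}
}
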
Oct 30 03:59:44.151: INFO: 
Logging kubelet events for node node2
Oct 30 03:59:44.154: INFO: 
Logging pods the kubelet thinks are on node node2
Oct 30 03:59:44.177: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 03:59:44.177: INFO: 	Container kube-flannel ready: true, restart count 3
Oct 30 03:59:44.177: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 03:59:44.177: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container cmk-webhook ready: true, restart count 0
Oct 30 03:59:44.177: INFO: service-proxy-toggled-h792q started at 2021-10-30 03:59:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 03:59:44.177: INFO: netserver-1 started at 2021-10-30 03:58:27 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container webserver ready: true, restart count 0
Oct 30 03:59:44.177: INFO: pod-server-1 started at 2021-10-30 03:58:37 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:59:44.177: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 03:59:44.177: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Oct 30 03:59:44.177: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container discover ready: false, restart count 0
Oct 30 03:59:44.177: INFO: 	Container init ready: false, restart count 0
Oct 30 03:59:44.177: INFO: 	Container install ready: false, restart count 0
Oct 30 03:59:44.177: INFO: up-down-3-qk7dl started at 2021-10-30 03:59:17 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container up-down-3 ready: true, restart count 0
Oct 30 03:59:44.177: INFO: service-proxy-toggled-lwc7z started at 2021-10-30 03:59:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 03:59:44.177: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 03:59:44.177: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 03:59:44.177: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 03:59:44.177: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 03:59:44.177: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 03:59:44.177: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container tas-extender ready: true, restart count 0
Oct 30 03:59:44.177: INFO: service-proxy-disabled-rnj55 started at 2021-10-30 03:59:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 03:59:44.177: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container collectd ready: true, restart count 0
Oct 30 03:59:44.177: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 03:59:44.177: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 03:59:44.177: INFO: up-down-3-zclc7 started at 2021-10-30 03:59:17 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container up-down-3 ready: true, restart count 0
Oct 30 03:59:44.177: INFO: up-down-3-5qbm2 started at 2021-10-30 03:59:17 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container up-down-3 ready: true, restart count 0
Oct 30 03:59:44.177: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 03:59:44.177: INFO: service-proxy-toggled-r5j6m started at 2021-10-30 03:59:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 03:59:44.177: INFO: iperf2-clients-fnnqz started at 2021-10-30 03:59:05 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container iperf2-client ready: true, restart count 0
Oct 30 03:59:44.177: INFO: verify-service-down-host-exec-pod started at 2021-10-30 03:59:38 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 30 03:59:44.177: INFO: up-down-2-6422j started at 2021-10-30 03:58:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:59:44.177: INFO: up-down-2-j2w7w started at 2021-10-30 03:58:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.177: INFO: 	Container up-down-2 ready: true, restart count 0
Oct 30 03:59:44.177: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 03:59:44.178: INFO: 	Container nfd-worker ready: true, restart count 0
W1030 03:59:44.192311      34 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 03:59:44.507: INFO: 
Latency metrics for node node2
Oct 30 03:59:44.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-7101" for this suite.


• Failure [74.053 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Oct 30 03:59:41.329: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":1,"skipped":492,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
Oct 30 03:59:44.524: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:49.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
Oct 30 03:58:49.598: INFO: deploying iperf2 server
Oct 30 03:58:49.601: INFO: Waiting for deployment "iperf2-server-deployment" to complete
Oct 30 03:58:49.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 30 03:58:51.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:58:53.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:58:55.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:58:57.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:58:59.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:59:01.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:59:03.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63771163129, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 30 03:59:05.618: INFO: waiting for iperf2 server endpoints
Oct 30 03:59:07.624: INFO: found iperf2 server endpoints
Oct 30 03:59:07.624: INFO: waiting for client pods to be running
Oct 30 03:59:13.628: INFO: all client pods are ready: 2 pods
Oct 30 03:59:13.630: INFO: server pod phase Running
Oct 30 03:59:13.630: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 03:58:49 +0000 UTC Reason: Message:}
Oct 30 03:59:13.631: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 03:59:03 +0000 UTC Reason: Message:}
Oct 30 03:59:13.631: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 03:59:03 +0000 UTC Reason: Message:}
Oct 30 03:59:13.631: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-30 03:58:49 +0000 UTC Reason: Message:}
Oct 30 03:59:13.631: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2021-10-30 03:59:01 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://5db8b56011cbc3e3467a6e80bf04088a9c7a18090b660d18905a32b4ed42ac79 Started:0xc003e116ec}
Oct 30 03:59:13.631: INFO: found 2 matching client pods
Oct 30 03:59:13.632: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-4862 PodName:iperf2-clients-cgvhn ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:13.633: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:13.850: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Oct 30 03:59:13.850: INFO: iperf version: 
Oct 30 03:59:13.850: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-cgvhn (node node1)
Oct 30 03:59:13.854: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-4862 PodName:iperf2-clients-cgvhn ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:13.854: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:29.323: INFO: Exec stderr: ""
Oct 30 03:59:29.323: INFO: output from exec on client pod iperf2-clients-cgvhn (node node1): 
20211030035915.304,10.244.3.199,47738,10.233.23.160,6789,3,0.0-1.0,3418750976,27350007808
20211030035916.291,10.244.3.199,47738,10.233.23.160,6789,3,1.0-2.0,3498835968,27990687744
20211030035917.299,10.244.3.199,47738,10.233.23.160,6789,3,2.0-3.0,3426484224,27411873792
20211030035918.306,10.244.3.199,47738,10.233.23.160,6789,3,3.0-4.0,3490185216,27921481728
20211030035919.293,10.244.3.199,47738,10.233.23.160,6789,3,4.0-5.0,3449028608,27592228864
20211030035920.300,10.244.3.199,47738,10.233.23.160,6789,3,5.0-6.0,3485597696,27884781568
20211030035921.287,10.244.3.199,47738,10.233.23.160,6789,3,6.0-7.0,3511549952,28092399616
20211030035922.295,10.244.3.199,47738,10.233.23.160,6789,3,7.0-8.0,3493462016,27947696128
20211030035923.302,10.244.3.199,47738,10.233.23.160,6789,3,8.0-9.0,3504472064,28035776512
20211030035924.289,10.244.3.199,47738,10.233.23.160,6789,3,9.0-10.0,3443785728,27550285824
20211030035924.289,10.244.3.199,47738,10.233.23.160,6789,3,0.0-10.0,34722152448,27777635847

Oct 30 03:59:29.327: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-4862 PodName:iperf2-clients-fnnqz ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:29.327: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:29.436: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Oct 30 03:59:29.436: INFO: iperf version: 
Oct 30 03:59:29.436: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-fnnqz (node node2)
Oct 30 03:59:29.438: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-4862 PodName:iperf2-clients-fnnqz ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:29.438: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:44.658: INFO: Exec stderr: ""
Oct 30 03:59:44.658: INFO: output from exec on client pod iperf2-clients-fnnqz (node node2): 
20211030035930.630,10.244.4.82,49148,10.233.23.160,6789,3,0.0-1.0,120061952,960495616
20211030035931.651,10.244.4.82,49148,10.233.23.160,6789,3,1.0-2.0,105775104,846200832
20211030035932.641,10.244.4.82,49148,10.233.23.160,6789,3,2.0-3.0,119537664,956301312
20211030035933.618,10.244.4.82,49148,10.233.23.160,6789,3,3.0-4.0,107347968,858783744
20211030035934.635,10.244.4.82,49148,10.233.23.160,6789,3,4.0-5.0,118489088,947912704
20211030035935.614,10.244.4.82,49148,10.233.23.160,6789,3,5.0-6.0,104202240,833617920
20211030035936.624,10.244.4.82,49148,10.233.23.160,6789,3,6.0-7.0,117702656,941621248
20211030035937.618,10.244.4.82,49148,10.233.23.160,6789,3,7.0-8.0,117702656,941621248
20211030035938.629,10.244.4.82,49148,10.233.23.160,6789,3,8.0-9.0,118489088,947912704
20211030035939.622,10.244.4.82,49148,10.233.23.160,6789,3,9.0-10.0,116785152,934281216
20211030035939.622,10.244.4.82,49148,10.233.23.160,6789,3,0.0-10.0,1146093568,916651374

Oct 30 03:59:44.658: INFO:                                From                                 To    Bandwidth (MB/s)
Oct 30 03:59:44.658: INFO:                               node1                              node1                3311
Oct 30 03:59:44.658: INFO:                               node2                              node1                 109
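The From/To table is derived from the cumulative "0.0-10.0" line of each client's CSV report, whose last field is a bits-per-second total. A small sketch of that conversion, assuming the MB/s column means MiB/s; with that assumption it reproduces the 3311 and 109 figures above.

package main

import "fmt"

func main() {
	// Final "0.0-10.0" bits/sec figures from the two client reports above.
	totals := map[string]float64{
		"node1 -> node1": 27777635847,
		"node2 -> node1": 916651374,
	}
	for route, bps := range totals {
		// bits/s -> bytes/s -> MiB/s
		mbps := bps / 8 / (1024 * 1024)
		fmt.Printf("%s: %.0f MB/s\n", route, mbps)
	}
}
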
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 03:59:44.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-4862" for this suite.


• [SLOW TEST:55.092 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":2,"skipped":293,"failed":0}
Oct 30 03:59:44.669: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:57:48.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-3295
STEP: creating service up-down-1 in namespace services-3295
STEP: creating replication controller up-down-1 in namespace services-3295
I1030 03:57:48.897959      38 runners.go:190] Created replication controller with name: up-down-1, namespace: services-3295, replica count: 3
I1030 03:57:51.949333      38 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:57:54.950439      38 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:57:57.951127      38 runners.go:190] up-down-1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:58:00.951409      38 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-3295
STEP: creating service up-down-2 in namespace services-3295
STEP: creating replication controller up-down-2 in namespace services-3295
I1030 03:58:00.965308      38 runners.go:190] Created replication controller with name: up-down-2, namespace: services-3295, replica count: 3
I1030 03:58:04.016418      38 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:58:07.017192      38 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:58:10.018130      38 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:58:13.018708      38 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
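The runners.go progress lines above poll the replication controller's pods until all three report Running. A hedged approximation of that poll with client-go follows; the "name=up-down-2" label selector is an assumption about how the controller labels its pods, not something taken from the suite's source, while the namespace and kubeconfig path match this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll until all three replicas are Running, mirroring the
	// "3 out of 3 created, N running" progress lines above.
	for {
		pods, err := client.CoreV1().Pods("services-3295").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "name=up-down-2"})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		fmt.Printf("up-down-2 Pods: %d created, %d running\n", len(pods.Items), running)
		if len(pods.Items) == 3 && running == 3 {
			return
		}
		time.Sleep(3 * time.Second)
	}
}
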
STEP: verifying service up-down-1 is up
Oct 30 03:58:13.021: INFO: Creating new host exec pod
Oct 30 03:58:13.032: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:15.036: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:17.037: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:58:17.037: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:58:23.054: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.27:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-host-exec-pod
Oct 30 03:58:23.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.27:80 2>&1 || true; echo; done'
Oct 30 03:58:23.467: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n"
Oct 30 03:58:23.467: INFO: stdout: "up-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\n"
Oct 30 03:58:23.467: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.27:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-exec-pod-d6cth
Oct 30 03:58:23.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-exec-pod-d6cth -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.27:80 2>&1 || true; echo; done'
Oct 30 03:58:24.231: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.27:80\n+ echo\n"
Oct 30 03:58:24.231: INFO: stdout: "up-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\nup-down-1-vrw66\n"
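The "verifying service has 3 reachable backends" step passes when the 150 wget responses above include all three endpoint pod names (up-down-1-vrw66, up-down-1-hsqsg, up-down-1-cxgrc). A simple stand-in for that check, fed a few lines in the same shape as the stdout dump; the real test applies it to the full response list.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// A short excerpt shaped like the wget stdout above.
	stdout := "up-down-1-vrw66\nup-down-1-hsqsg\nup-down-1-cxgrc\nup-down-1-vrw66\n"

	seen := map[string]bool{}
	for _, line := range strings.Split(stdout, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			seen[line] = true
		}
	}
	if len(seen) == 3 {
		fmt.Println("service has 3 reachable backends")
	} else {
		fmt.Println("expected 3 distinct backends, saw", len(seen))
	}
}
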
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3295
STEP: Deleting pod verify-service-up-exec-pod-d6cth in namespace services-3295
STEP: verifying service up-down-2 is up
Oct 30 03:58:24.246: INFO: Creating new host exec pod
Oct 30 03:58:24.263: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:26.267: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:28.268: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:30.267: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:32.269: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:34.267: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:36.267: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:38.268: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:58:38.268: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:58:46.285: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-host-exec-pod
Oct 30 03:58:46.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done'
Oct 30 03:58:46.751: INFO: stderr: sh -x trace of the 150-iteration probe loop: "+ seq 1 150" followed by repeated "+ wget -q -T 1 -O - http://10.233.33.179:80" / "+ echo" pairs
Oct 30 03:58:46.752: INFO: stdout: "up-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\n"
Oct 30 03:58:46.752: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-exec-pod-hf425
Oct 30 03:58:46.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-exec-pod-hf425 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done'
Oct 30 03:58:47.261: INFO: stderr: sh -x trace of the 150-iteration probe loop: "+ seq 1 150" followed by repeated "+ wget -q -T 1 -O - http://10.233.33.179:80" / "+ echo" pairs
Oct 30 03:58:47.262: INFO: stdout: "up-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3295
STEP: Deleting pod verify-service-up-exec-pod-hf425 in namespace services-3295
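The verification above runs the same probe from both a host-network pod and a regular exec pod: a 150-iteration wget loop against the up-down-2 ClusterIP, with each backend echoing its pod name. The exact command from this run can be issued standalone (pod name, namespace and ClusterIP are the ones logged above and will differ on any other cluster):

# Probe the up-down-2 ClusterIP 150 times from the exec pod created by the test;
# each successful request prints the name of the backend pod that answered.
kubectl --kubeconfig=/root/.kube/config -n services-3295 \
  exec verify-service-up-exec-pod-hf425 -- /bin/sh -x -c \
  'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done'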
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-3295, will wait for the garbage collector to delete the pods
Oct 30 03:58:47.333: INFO: Deleting ReplicationController up-down-1 took: 4.297307ms
Oct 30 03:58:47.435: INFO: Terminating ReplicationController up-down-1 pods took: 101.025873ms
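The two timings above cover the two phases of stopping up-down-1: the ReplicationController delete call itself, then the garbage collector terminating its pods. A rough kubectl equivalent of what the framework does here (the name=up-down-1 label selector is an assumption about how the test labels its RC pods):

# Delete the RC and let the garbage collector cascade-delete its pods,
# then block until no up-down-1 pods are left in the namespace.
kubectl -n services-3295 delete rc up-down-1 --cascade=background
kubectl -n services-3295 wait --for=delete pod -l name=up-down-1 --timeout=2m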
STEP: verifying service up-down-1 is not up
Oct 30 03:58:54.445: INFO: Creating new host exec pod
Oct 30 03:58:54.460: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:56.463: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:58.462: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
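The Pending/Running polling above is the usual readiness gate before exec'ing into the freshly created host exec pod; outside the framework the same wait can be expressed directly with kubectl:

# Block until the verification pod reports Ready (or give up after 2 minutes).
kubectl -n services-3295 wait --for=condition=Ready \
  pod/verify-service-down-host-exec-pod --timeout=2m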
Oct 30 03:58:58.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.30.27:80 && echo service-down-failed'
Oct 30 03:59:00.987: INFO: rc: 28
Oct 30 03:59:00.987: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.30.27:80 && echo service-down-failed" in pod services-3295/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.30.27:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.30.27:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3295
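The down check just performed is the mirror image of the up check: curl the ClusterIP that up-down-1 used with a 2-second connect timeout and treat a timeout (curl exit code 28, propagated through kubectl exec) as the expected outcome; any answer would print the service-down-failed marker and fail the step. A minimal sketch of that assertion, using the pod and address from this run:

# Expect the connection attempt to time out now that the up-down-1 backends are gone.
if kubectl -n services-3295 exec verify-service-down-host-exec-pod -- /bin/sh -c \
     'curl -g -s --connect-timeout 2 http://10.233.30.27:80 && echo service-down-failed'; then
  echo "service still reachable: check failed"
else
  echo "service unreachable as expected (curl timed out)"
fi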
STEP: verifying service up-down-2 is still up
Oct 30 03:59:00.992: INFO: Creating new host exec pod
Oct 30 03:59:01.004: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:03.011: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:05.008: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:07.008: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:59:07.008: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:59:17.025: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-host-exec-pod
Oct 30 03:59:17.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done'
Oct 30 03:59:17.388: INFO: stderr: sh -x trace of the 150-iteration probe loop: "+ seq 1 150" followed by repeated "+ wget -q -T 1 -O - http://10.233.33.179:80" / "+ echo" pairs
Oct 30 03:59:17.388: INFO: stdout: "up-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\n"
Oct 30 03:59:17.388: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-exec-pod-7hllr
Oct 30 03:59:17.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-exec-pod-7hllr -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done'
Oct 30 03:59:17.741: INFO: stderr: sh -x trace of the 150-iteration probe loop: "+ seq 1 150" followed by repeated "+ wget -q -T 1 -O - http://10.233.33.179:80" / "+ echo" pairs
Oct 30 03:59:17.741: INFO: stdout: "up-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3295
STEP: Deleting pod verify-service-up-exec-pod-7hllr in namespace services-3295
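The "service has 3 reachable backends" assertion passes only when every up-down-2 pod name shows up somewhere in the 150 responses. One way to eyeball the same condition by hand, before the verification pods are torn down (pod, address and expected names are the ones from this run):

# Tally which backends answered; all three up-down-2 pods
# (up-down-2-j2w7w, up-down-2-j6pdl, up-down-2-6422j) should appear in the counts.
kubectl -n services-3295 exec verify-service-up-host-exec-pod -- /bin/sh -c \
  'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done' \
  | sort | uniq -c | sort -rn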
STEP: creating service up-down-3 in namespace services-3295
STEP: creating service up-down-3 in namespace services-3295
STEP: creating replication controller up-down-3 in namespace services-3295
I1030 03:59:17.764773      38 runners.go:190] Created replication controller with name: up-down-3, namespace: services-3295, replica count: 3
I1030 03:59:20.815787      38 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:23.816020      38 runners.go:190] up-down-3 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:26.817136      38 runners.go:190] up-down-3 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:29.818379      38 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
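up-down-3 comes up as a ClusterIP service backed by a ReplicationController with three replicas; the runners.go lines above track its pods from pending to running. A hand-written sketch of roughly what gets created (the serve-hostname image, its tag, port 9376 and the name=up-down-3 labels are assumptions, not the framework's exact generated manifest):

kubectl -n services-3295 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: up-down-3
spec:
  selector:
    name: up-down-3
  ports:
  - port: 80          # ClusterIP port the probes hit
    targetPort: 9376  # serve-hostname default port
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: up-down-3
spec:
  replicas: 3
  selector:
    name: up-down-3
  template:
    metadata:
      labels:
        name: up-down-3
    spec:
      containers:
      - name: up-down-3
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # image tag is an assumption
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF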
STEP: verifying service up-down-2 is still up
Oct 30 03:59:29.821: INFO: Creating new host exec pod
Oct 30 03:59:29.842: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:31.846: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:33.846: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:59:33.846: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:59:37.896: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-host-exec-pod
Oct 30 03:59:37.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done'
Oct 30 03:59:39.240: INFO: stderr: sh -x trace of the 150-iteration probe loop: "+ seq 1 150" followed by repeated "+ wget -q -T 1 -O - http://10.233.33.179:80" / "+ echo" pairs
Oct 30 03:59:39.240: INFO: stdout: "up-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-6422j\nup-down-2-j2w7w\nup-down-2-j6pdl\nup-down-2-j6pdl\nup-down-2-j2w7w\nup-down-2-6422j\nup-down-2-j2w7w\n"
Oct 30 03:59:39.240: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-exec-pod-lwwdn
Oct 30 03:59:39.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-exec-pod-lwwdn -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done'
Oct 30 03:59:39.736: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.33.179:80" and "+ echo" (shell trace of the probe loop; full output elided)
Oct 30 03:59:39.737: INFO: stdout: responses observed only from the three up-down-2 backends up-down-2-j2w7w, up-down-2-6422j and up-down-2-j6pdl (full per-request list of the 150 probes elided)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3295
STEP: Deleting pod verify-service-up-exec-pod-lwwdn in namespace services-3295
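The lines above are the tail of the "service up-down-2 is up" check: the framework execs the same probe loop from a host-network pod and from a regular pod, hitting the service ClusterIP 150 times, and each successful request prints the name of the backend pod that served it. A minimal sketch of that probe, using the command the log itself shows (namespace, pod name and ClusterIP taken from the log; everything else is plain kubectl/wget usage):

  # probe the up-down-2 ClusterIP 150 times from the host exec pod;
  # each successful hit prints the serving backend's pod name
  kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 \
    exec verify-service-up-host-exec-pod -- /bin/sh -c \
    'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.33.179:80 2>&1 || true; echo; done'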
STEP: verifying service up-down-3 is up
Oct 30 03:59:39.750: INFO: Creating new host exec pod
Oct 30 03:59:39.761: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:41.766: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:43.764: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:45.765: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:47.766: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:49.765: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:51.765: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:53.766: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:55.766: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:57.767: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
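The Pending messages above are the framework polling the host exec pod until it reports Ready. Outside the suite the same wait could be expressed with kubectl wait; a minimal sketch, assuming the pod and namespace names from the log (the 2m timeout is an arbitrary choice):

  # block until the host exec pod reports the Ready condition
  kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 \
    wait --for=condition=Ready pod/verify-service-up-host-exec-pod --timeout=2m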
Oct 30 03:59:57.767: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 04:00:01.788: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.13.242:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-host-exec-pod
Oct 30 04:00:01.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.13.242:80 2>&1 || true; echo; done'
Oct 30 04:00:02.315: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.13.242:80" and "+ echo" (shell trace of the probe loop; full output elided)
Oct 30 04:00:02.316: INFO: stdout: responses observed only from the three up-down-3 backends up-down-3-qk7dl, up-down-3-zclc7 and up-down-3-5qbm2 (full per-request list of the 150 probes elided)
Oct 30 04:00:02.316: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.13.242:80 2>&1 || true; echo; done" in pod services-3295/verify-service-up-exec-pod-tp6sw
Oct 30 04:00:02.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3295 exec verify-service-up-exec-pod-tp6sw -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.13.242:80 2>&1 || true; echo; done'
Oct 30 04:00:02.670: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.13.242:80" and "+ echo" (shell trace of the probe loop; full output elided)
Oct 30 04:00:02.671: INFO: stdout: responses observed only from the three up-down-3 backends up-down-3-qk7dl, up-down-3-zclc7 and up-down-3-5qbm2 (full per-request list of the 150 probes elided)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3295
STEP: Deleting pod verify-service-up-exec-pod-tp6sw in namespace services-3295
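The "3 reachable backends" assertion amounts to checking that every endpoint of up-down-3 shows up at least once in the probe output. A rough shell equivalent on the captured per-request output (probes.txt is a hypothetical file holding that stdout; the real check is done in Go against the expected pod names):

  # tally which backends answered; expect the three up-down-3 pods
  sort probes.txt | uniq -c | sort -rn
  # blank lines, if any, would be probes that timed out (wget -T 1)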
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:00:02.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3295" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:133.831 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":155,"failed":0}
Oct 30 04:00:02.704: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:59:10.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-2572
STEP: creating service service-proxy-disabled in namespace services-2572
STEP: creating replication controller service-proxy-disabled in namespace services-2572
I1030 03:59:10.696337      24 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-2572, replica count: 3
I1030 03:59:13.748221      24 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:16.750363      24 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:19.750962      24 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-2572
STEP: creating service service-proxy-toggled in namespace services-2572
STEP: creating replication controller service-proxy-toggled in namespace services-2572
I1030 03:59:19.763582      24 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-2572, replica count: 3
I1030 03:59:22.814062      24 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:25.815012      24 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1030 03:59:28.815210      24 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
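Both replication controllers are considered ready once runners.go reports all 3 replicas running. The same condition could be polled out of band against the controller status; a sketch, assuming the names and namespace from the log (readyReplicas is a standard ReplicationController status field, the sleep interval is arbitrary):

  # poll until the service-proxy-toggled controller reports 3 ready replicas
  until [ "$(kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 \
      get rc service-proxy-toggled -o jsonpath='{.status.readyReplicas}')" = "3" ]; do
    sleep 3
  done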
STEP: verifying service is up
Oct 30 03:59:28.818: INFO: Creating new host exec pod
Oct 30 03:59:28.829: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:30.833: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:32.834: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 03:59:32.834: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 03:59:36.855: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.204:80 2>&1 || true; echo; done" in pod services-2572/verify-service-up-host-exec-pod
Oct 30 03:59:36.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.204:80 2>&1 || true; echo; done'
Oct 30 03:59:38.544: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.4.204:80" and "+ echo" (shell trace of the probe loop; full output elided)
Oct 30 03:59:38.545: INFO: stdout: responses observed only from the three service-proxy-toggled backends service-proxy-toggled-h792q, service-proxy-toggled-lwc7z and service-proxy-toggled-r5j6m (full per-request list of the 150 probes elided)
Oct 30 03:59:38.545: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.204:80 2>&1 || true; echo; done" in pod services-2572/verify-service-up-exec-pod-54jjr
Oct 30 03:59:38.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-up-exec-pod-54jjr -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.204:80 2>&1 || true; echo; done'
Oct 30 03:59:38.925: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.4.204:80" and "+ echo" (shell trace of the probe loop; full output elided)
Oct 30 03:59:38.926: INFO: stdout: responses observed only from the three service-proxy-toggled backends service-proxy-toggled-h792q, service-proxy-toggled-lwc7z and service-proxy-toggled-r5j6m (full per-request list of the 150 probes elided)
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2572
STEP: Deleting pod verify-service-up-exec-pod-54jjr in namespace services-2572
STEP: verifying service-disabled is not up
Oct 30 03:59:38.940: INFO: Creating new host exec pod
Oct 30 03:59:38.955: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:40.959: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:42.960: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 03:59:42.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.24.236:80 && echo service-down-failed'
Oct 30 03:59:45.203: INFO: rc: 28
Oct 30 03:59:45.204: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.24.236:80 && echo service-down-failed" in pod services-2572/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.24.236:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.24.236:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2572
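The curl probe above is the entire "service is down" check: a 2-second connect timeout against the disabled service's ClusterIP, where curl exit code 28 (connect timeout) is the expected outcome. A minimal hand-run sketch of the same probe from any host on the cluster network (the ClusterIP 10.233.24.236 is taken from the log; everything else is illustrative):
  # Probe the ClusterIP of the service that should NOT be proxied.
  # curl exit code 28 = connect timeout, i.e. nothing answers for this VIP.
  curl -g -s --connect-timeout 2 http://10.233.24.236:80
  rc=$?
  [ "$rc" -eq 28 ] && echo "service-disabled is not proxied (as expected)"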
STEP: adding service-proxy-name label
STEP: verifying service is not up
Oct 30 03:59:45.222: INFO: Creating new host exec pod
Oct 30 03:59:45.237: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:47.241: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:49.241: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:51.240: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:53.242: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:55.240: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:57.242: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:59:59.240: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:00:01.241: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:00:03.242: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 04:00:03.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.4.204:80 && echo service-down-failed'
Oct 30 04:00:05.538: INFO: rc: 28
Oct 30 04:00:05.538: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.4.204:80 && echo service-down-failed" in pod services-2572/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.4.204:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.4.204:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2572
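The label being toggled here is service.kubernetes.io/service-proxy-name: when it carries any non-empty value, the default kube-proxy skips the service, which is why the ClusterIP stops answering in the check above. A hedged sketch of doing the same toggle by hand (the service name "service-proxy-toggled" is inferred from the backend pod names in this log, and the value "foo" is arbitrary):
  # Opt the service out of the default kube-proxy (service name inferred from the log).
  kubectl -n services-2572 label svc service-proxy-toggled service.kubernetes.io/service-proxy-name=foo
  # Put it back under kube-proxy's control, as the next step in the log does.
  kubectl -n services-2572 label svc service-proxy-toggled service.kubernetes.io/service-proxy-name-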
STEP: removing service-proxy-name label
STEP: verifying service is up
Oct 30 04:00:05.556: INFO: Creating new host exec pod
Oct 30 04:00:05.568: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:00:07.572: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:00:09.589: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 30 04:00:09.589: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 30 04:00:15.606: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.204:80 2>&1 || true; echo; done" in pod services-2572/verify-service-up-host-exec-pod
Oct 30 04:00:15.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.204:80 2>&1 || true; echo; done'
Oct 30 04:00:16.024: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.4.204:80" each followed by "+ echo"
Oct 30 04:00:16.024: INFO: stdout: 150-request hostname dump (condensed): every reply came from service-proxy-toggled-lwc7z, service-proxy-toggled-h792q or service-proxy-toggled-r5j6m
Oct 30 04:00:16.025: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.204:80 2>&1 || true; echo; done" in pod services-2572/verify-service-up-exec-pod-4fjgn
Oct 30 04:00:16.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-up-exec-pod-4fjgn -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.4.204:80 2>&1 || true; echo; done'
Oct 30 04:00:16.342: INFO: stderr: "+ seq 1 150", then 150 repetitions of "+ wget -q -T 1 -O - http://10.233.4.204:80" each followed by "+ echo"
Oct 30 04:00:16.343: INFO: stdout: 150-request hostname dump (condensed): every reply came from service-proxy-toggled-lwc7z, service-proxy-toggled-h792q or service-proxy-toggled-r5j6m
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2572
STEP: Deleting pod verify-service-up-exec-pod-4fjgn in namespace services-2572
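The "3 reachable backends" check above simply hits the ClusterIP 150 times and requires every backend pod name to appear among the replies. A condensed sketch of the same loop, run from the host-exec pod or any node (the IP and pod names are from the log; the sort | uniq -c summary is an addition for readability):
  # Expect all three service-proxy-toggled-* hostnames to show up in the counts.
  for i in $(seq 1 150); do
    wget -q -T 1 -O - http://10.233.4.204:80 2>&1 || true
    echo
  done | sort | uniq -c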
STEP: verifying service-disabled is still not up
Oct 30 04:00:16.355: INFO: Creating new host exec pod
Oct 30 04:00:16.370: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 30 04:00:18.373: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 30 04:00:18.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.24.236:80 && echo service-down-failed'
Oct 30 04:00:20.636: INFO: rc: 28
Oct 30 04:00:20.636: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.24.236:80 && echo service-down-failed" in pod services-2572/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2572 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.24.236:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.24.236:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2572
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:00:20.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2572" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:69.983 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":6,"skipped":707,"failed":0}
Oct 30 04:00:20.654: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 03:58:27.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-7055
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 30 03:58:27.614: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 30 03:58:27.645: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:29.648: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:31.650: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:33.648: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 30 03:58:35.649: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:37.650: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:39.649: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:41.649: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:43.648: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:45.649: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:47.649: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 30 03:58:49.648: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 30 03:58:49.653: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 30 03:59:01.674: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 30 03:59:01.674: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Oct 30 03:59:01.696: INFO: Service node-port-service in namespace nettest-7055 found.
Oct 30 03:59:01.710: INFO: Service session-affinity-service in namespace nettest-7055 found.
STEP: Waiting for NodePort service to expose endpoint
Oct 30 03:59:02.712: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Oct 30 03:59:03.716: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.233.57.70:90 (config.clusterIP)
Oct 30 03:59:03.723: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.233.57.70&port=90&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:03.723: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:04.092: INFO: Waiting for responses: map[netserver-0:{}]
Oct 30 03:59:06.096: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.233.57.70&port=90&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:06.096: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:06.181: INFO: Waiting for responses: map[]
Oct 30 03:59:06.181: INFO: reached 10.233.57.70 after 1/34 tries
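The dial above does not hit the service directly: it asks agnhost's netexec /dial handler on test-container-pod to send a "hostname" request over UDP to the target and report which backends replied. A sketch of the same query (pod IP 10.244.3.193, service IP 10.233.57.70 and port 90 are from the log; the reply shape is paraphrased from memory, so treat it as approximate):
  # Ask the test pod to dial the service ClusterIP over UDP and report responders.
  # Expected reply is JSON roughly of the form {"responses":["netserver-0"]}.
  curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.233.57.70&port=90&tries=1'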
STEP: dialing(udp) test-container-pod --> 10.10.190.207:31775 (nodeIP)
Oct 30 03:59:06.184: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:06.184: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:06.274: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:08.279: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:08.279: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:08.409: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:10.413: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:10.413: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:10.554: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:12.559: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:12.559: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:13.298: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:15.302: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:15.302: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:15.529: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:17.532: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:17.532: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:17.906: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:19.909: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:19.909: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:20.216: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:22.222: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:22.222: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:22.740: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:24.744: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:24.744: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:24.886: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:26.893: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:26.893: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:26.985: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:28.988: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:28.988: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:29.071: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:31.075: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:31.075: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:31.288: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:33.293: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:33.293: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:33.397: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:35.400: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:35.401: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:35.656: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:37.660: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:37.660: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:37.763: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:39.767: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:39.767: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:39.887: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:41.891: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:41.891: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:42.721: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:44.724: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:44.725: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:44.811: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:46.814: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:46.814: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:46.896: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:48.899: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:48.899: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:48.984: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:50.991: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:50.991: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:51.082: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:53.088: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:53.088: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:53.178: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:55.181: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:55.181: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:55.272: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:57.275: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:57.275: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:57.358: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 03:59:59.361: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 03:59:59.361: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 03:59:59.442: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 04:00:01.445: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 04:00:01.445: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:00:01.529: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 04:00:03.534: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 04:00:03.534: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:00:03.687: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 04:00:05.691: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 04:00:05.691: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:00:05.983: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 04:00:07.986: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 04:00:07.986: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:00:08.249: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 04:00:10.252: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 04:00:10.253: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:00:10.357: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 04:00:12.362: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 04:00:12.363: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:00:12.815: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 04:00:14.819: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 04:00:14.819: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:00:14.929: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 04:00:16.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 04:00:16.932: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:00:17.061: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
Oct 30 04:00:19.065: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'] Namespace:nettest-7055 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 30 04:00:19.065: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:00:19.172: INFO: Waiting for responses: map[netserver-0:{} netserver-1:{}]
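At this point the NodePort dials have used all 34 tries without either netserver hostname ever showing up in the responses, so the framework falls back to dumping pod state below. For debugging outside the framework, the backends' netexec UDP handler answers the literal string "hostname" with the pod's hostname; a hypothetical direct probe of the NodePort (node IP and port from the log, netcat flags vary by implementation):
  # Send "hostname" to the UDP NodePort and print whatever backend answers.
  echo -n hostname | nc -u -w 1 10.10.190.207 31775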
Oct 30 04:00:21.172: INFO: 
Output of kubectl describe pod nettest-7055/netserver-0:

Oct 30 04:00:21.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-7055 describe pod netserver-0 --namespace=nettest-7055'
Oct 30 04:00:21.356: INFO: stderr: ""
Oct 30 04:00:21.357: INFO: Name:         netserver-0
Namespace:    nettest-7055
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Sat, 30 Oct 2021 03:58:27 +0000
Labels:       selector-8db9e637-2638-4036-b481-1f503ab685a0=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.183"
                    ],
                    "mac": "d6:a8:bc:04:da:02",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.183"
                    ],
                    "mac": "d6:a8:bc:04:da:02",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.3.183
IPs:
  IP:  10.244.3.183
Containers:
  webserver:
    Container ID:  docker://990f9d7bfcbda2a77a91bade0ecca4bd43c16ecc9ebeded8d00f925d30208d01
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Sat, 30 Oct 2021 03:58:31 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6c8mr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-6c8mr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node1
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  114s  default-scheduler  Successfully assigned nettest-7055/netserver-0 to node1
  Normal  Pulling    111s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     111s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 341.504973ms
  Normal  Created    111s  kubelet            Created container webserver
  Normal  Started    110s  kubelet            Started container webserver

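The describe output above shows the netserver pods running agnhost netexec with HTTP on 8080, UDP on 8081, and liveness/readiness probes against /healthz on the HTTP port. A minimal Go sketch of the same HTTP check the kubelet performs, using the netserver-0 pod IP 10.244.3.183 from the output above (only reachable from inside the cluster network; the client setup itself is illustrative, not taken from the framework):

package main

import (
    "fmt"
    "net/http"
    "time"
)

// Hit the same endpoint as the probes shown above:
// http-get http://<podIP>:8080/healthz with the probe's 30s timeout.
func main() {
    client := &http.Client{Timeout: 30 * time.Second}
    resp, err := client.Get("http://10.244.3.183:8080/healthz") // netserver-0 pod IP from the describe output
    if err != nil {
        fmt.Println("probe failed:", err)
        return
    }
    defer resp.Body.Close()
    // The kubelet treats any status in the range [200, 400) as probe success.
    fmt.Println("probe status:", resp.Status)
}
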
Oct 30 04:00:21.357: INFO: 
Output of kubectl describe pod nettest-7055/netserver-1:

Oct 30 04:00:21.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-7055 describe pod netserver-1 --namespace=nettest-7055'
Oct 30 04:00:21.550: INFO: stderr: ""
Oct 30 04:00:21.551: INFO: Name:         netserver-1
Namespace:    nettest-7055
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Sat, 30 Oct 2021 03:58:27 +0000
Labels:       selector-8db9e637-2638-4036-b481-1f503ab685a0=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.71"
                    ],
                    "mac": "3a:fc:25:b7:f2:53",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.71"
                    ],
                    "mac": "3a:fc:25:b7:f2:53",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.4.71
IPs:
  IP:  10.244.4.71
Containers:
  webserver:
    Container ID:  docker://c5d40c9ee8a82e1b3a7484683d3809d60e85cada2b084067d63b9ef011699881
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Sat, 30 Oct 2021 03:58:30 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5txgs (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-5txgs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node2
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  114s  default-scheduler  Successfully assigned nettest-7055/netserver-1 to node2
  Normal  Pulling    112s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     111s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 340.974739ms
  Normal  Created    111s  kubelet            Created container webserver
  Normal  Started    111s  kubelet            Started container webserver

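Both netserver pods also expose the netexec UDP listener on 8081 (Ports: 8080/TCP, 8081/UDP above), which is the protocol that fails later in this test. A hedged Go sketch of probing that listener directly, assuming netexec answers the literal command "hostname" with the pod's hostname (pod IP taken from the netserver-1 describe; the rest is illustrative):

package main

import (
    "fmt"
    "net"
    "time"
)

// Send the "hostname" command to netserver-1's UDP listener on 8081.
// A read timeout here mirrors the kind of UDP failure reported below.
func main() {
    conn, err := net.DialTimeout("udp", "10.244.4.71:8081", 5*time.Second)
    if err != nil {
        fmt.Println("dial failed:", err)
        return
    }
    defer conn.Close()

    if _, err := conn.Write([]byte("hostname")); err != nil {
        fmt.Println("write failed:", err)
        return
    }

    _ = conn.SetReadDeadline(time.Now().Add(5 * time.Second))
    buf := make([]byte, 1024)
    n, err := conn.Read(buf)
    if err != nil {
        fmt.Println("read failed:", err)
        return
    }
    fmt.Println("reply:", string(buf[:n])) // expected: netserver-1
}
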
Oct 30 04:00:21.551: INFO: encountered error during dial (did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{}])
Oct 30 04:00:21.551: FAIL: failed dialing endpoint, did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{}]

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000183e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000183e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000183e00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
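
The FAIL above is the connectivity check itself: the test curls the dial endpoint on the test-container-pod (10.244.3.193:9080), which fans the request out over UDP to NodePort 31775 on node IP 10.10.190.207 and reports which backend hostnames answered; after 34 tries it retrieved an empty map instead of netserver-0 and netserver-1. A rough Go sketch of one iteration of that check, assuming the /dial endpoint returns JSON shaped like {"responses": [...], "errors": [...]} (the field names are an assumption, not copied from the framework):

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// dialResult is an assumed shape for the JSON returned by the agnhost
// netexec /dial endpoint: which backends answered and any per-try errors.
type dialResult struct {
    Responses []string `json:"responses"`
    Errors    []string `json:"errors"`
}

func main() {
    // The same URL the failing test logged, minus the kubectl exec/curl wrapper.
    url := "http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1"
    resp, err := http.Get(url)
    if err != nil {
        fmt.Println("dial request failed:", err)
        return
    }
    defer resp.Body.Close()

    var r dialResult
    if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
        fmt.Println("decode failed:", err)
        return
    }

    // The test retries until the responders cover every expected endpoint
    // (netserver-0 and netserver-1) or the overall timeout expires.
    got := map[string]struct{}{}
    for _, host := range r.Responses {
        got[host] = struct{}{}
    }
    fmt.Printf("retrieved %v, expected map[netserver-0:{} netserver-1:{}]\n", got)
}
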
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-7055".
STEP: Found 15 events.
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:27 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-7055/netserver-0 to node1
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:27 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-7055/netserver-1 to node2
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:29 +0000 UTC - event for netserver-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:30 +0000 UTC - event for netserver-0: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 341.504973ms
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:30 +0000 UTC - event for netserver-0: {kubelet node1} Created: Created container webserver
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:30 +0000 UTC - event for netserver-0: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:30 +0000 UTC - event for netserver-1: {kubelet node2} Started: Started container webserver
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:30 +0000 UTC - event for netserver-1: {kubelet node2} Created: Created container webserver
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:30 +0000 UTC - event for netserver-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 340.974739ms
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:31 +0000 UTC - event for netserver-0: {kubelet node1} Started: Started container webserver
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:49 +0000 UTC - event for test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-7055/test-container-pod to node1
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:55 +0000 UTC - event for test-container-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:56 +0000 UTC - event for test-container-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 984.048668ms
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:56 +0000 UTC - event for test-container-pod: {kubelet node1} Created: Created container webserver
Oct 30 04:00:21.557: INFO: At 2021-10-30 03:58:59 +0000 UTC - event for test-container-pod: {kubelet node1} Started: Started container webserver
Oct 30 04:00:21.560: INFO: POD                 NODE   PHASE    GRACE  CONDITIONS
Oct 30 04:00:21.560: INFO: netserver-0         node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:27 +0000 UTC  }]
Oct 30 04:00:21.560: INFO: netserver-1         node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:27 +0000 UTC  }]
Oct 30 04:00:21.560: INFO: test-container-pod  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:59:01 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:59:01 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 03:58:49 +0000 UTC  }]
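
The pod/phase/conditions table above can be regenerated with client-go; a minimal sketch, assuming the kubeconfig path used throughout this run and the nettest-7055 namespace from the failure:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the kubeconfig this run uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // List the pods in the test namespace and print the same fields the
    // framework logs on failure: name, node, phase, conditions.
    pods, err := cs.CoreV1().Pods("nettest-7055").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        fmt.Printf("%s\t%s\t%s\t%v\n", p.Name, p.Spec.NodeName, p.Status.Phase, p.Status.Conditions)
    }
}
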
Oct 30 04:00:21.560: INFO: 
Oct 30 04:00:21.565: INFO: 
Logging node info for node master1
Oct 30 04:00:21.567: INFO: Node Info: &Node{ObjectMeta:{master1    b47c04d5-47a7-4a95-8e97-481e6e60af54 151528 0 2021-10-29 21:05:34 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:21 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:21 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:21 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:00:21 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 04:00:21.568: INFO: 
Logging kubelet events for node master1
Oct 30 04:00:21.570: INFO: 
Logging pods the kubelet thinks are on node master1
Oct 30 04:00:21.580: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.580: INFO: 	Container kube-controller-manager ready: true, restart count 2
Oct 30 04:00:21.580: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:00:21.580: INFO: 	Init container install-cni ready: true, restart count 0
Oct 30 04:00:21.580: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 04:00:21.580: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.580: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 04:00:21.580: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.580: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 04:00:21.580: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.580: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 04:00:21.580: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.580: INFO: 	Container coredns ready: true, restart count 1
Oct 30 04:00:21.580: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:00:21.580: INFO: 	Container docker-registry ready: true, restart count 0
Oct 30 04:00:21.580: INFO: 	Container nginx ready: true, restart count 0
Oct 30 04:00:21.580: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:00:21.580: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:00:21.580: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 04:00:21.580: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.580: INFO: 	Container kube-scheduler ready: true, restart count 0
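
Per-node listings like the one above ("pods the kubelet thinks are on node master1") can be approximated from the API side with a field selector on spec.nodeName; a hedged client-go sketch (node name and kubeconfig path taken from this run, the output format is illustrative):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Pods bound to a given node can be selected server-side with the
    // spec.nodeName field selector, across all namespaces.
    pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
        FieldSelector: "spec.nodeName=master1",
    })
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
    }
}
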
W1030 04:00:21.593479      30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 04:00:21.663: INFO: 
Latency metrics for node master1
Oct 30 04:00:21.663: INFO: 
Logging node info for node master2
Oct 30 04:00:21.665: INFO: Node Info: &Node{ObjectMeta:{master2    208792d3-d365-4ddb-83d4-10e6e818079c 151513 0 2021-10-29 21:06:06 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:17 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:17 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:17 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:00:17 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 04:00:21.666: INFO: 
Logging kubelet events for node master2
Oct 30 04:00:21.669: INFO: 
Logging pods the kubelet thinks are on node master2
Oct 30 04:00:21.676: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.676: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 04:00:21.676: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:00:21.676: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:00:21.676: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 04:00:21.676: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.676: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 04:00:21.677: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.677: INFO: 	Container kube-controller-manager ready: true, restart count 3
Oct 30 04:00:21.677: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.677: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 04:00:21.677: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.677: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 30 04:00:21.677: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:00:21.677: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 04:00:21.677: INFO: 	Container kube-flannel ready: true, restart count 1
W1030 04:00:21.690668      30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 04:00:21.746: INFO: 
Latency metrics for node master2
Oct 30 04:00:21.746: INFO: 
Logging node info for node master3
Oct 30 04:00:21.748: INFO: Node Info: &Node{ObjectMeta:{master3    168f1589-e029-47ae-b194-10215fc22d6a 151508 0 2021-10-29 21:06:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:17 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:17 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:17 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:00:17 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 04:00:21.748: INFO: 
Logging kubelet events for node master3
Oct 30 04:00:21.750: INFO: 
Logging pods the kubelet thinks are on node master3
Oct 30 04:00:21.758: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container autoscaler ready: true, restart count 1
Oct 30 04:00:21.758: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container nfd-controller ready: true, restart count 0
Oct 30 04:00:21.758: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 30 04:00:21.758: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 30 04:00:21.758: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 04:00:21.758: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 04:00:21.758: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 04:00:21.758: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container coredns ready: true, restart count 1
Oct 30 04:00:21.758: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:00:21.758: INFO: 	Container prometheus-operator ready: true, restart count 0
Oct 30 04:00:21.758: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:00:21.758: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 04:00:21.758: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container kube-controller-manager ready: true, restart count 1
Oct 30 04:00:21.758: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.758: INFO: 	Container kube-proxy ready: true, restart count 1
W1030 04:00:21.770700      30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 04:00:21.853: INFO: 
Latency metrics for node master3
Oct 30 04:00:21.853: INFO: 
Logging node info for node node1
Oct 30 04:00:21.856: INFO: Node Info: &Node{ObjectMeta:{node1    ddef9269-94c5-4165-81fb-a3b0c4ac5c75 151494 0 2021-10-29 21:07:27 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 01:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-30 03:08:22 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:16 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:16 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:16 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:00:16 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 04:00:21.857: INFO: 
Logging kubelet events for node node1
Oct 30 04:00:21.859: INFO: 
Logging pods the kubelet thinks are on node node1
Oct 30 04:00:21.874: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 04:00:21.874: INFO: netserver-0 started at 2021-10-30 03:58:27 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container webserver ready: true, restart count 0
Oct 30 04:00:21.874: INFO: up-down-2-j6pdl started at 2021-10-30 03:58:01 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container up-down-2 ready: false, restart count 0
Oct 30 04:00:21.874: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 30 04:00:21.874: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:00:21.874: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 04:00:21.874: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container config-reloader ready: true, restart count 0
Oct 30 04:00:21.874: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Oct 30 04:00:21.874: INFO: 	Container grafana ready: true, restart count 0
Oct 30 04:00:21.874: INFO: 	Container prometheus ready: true, restart count 1
Oct 30 04:00:21.874: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 04:00:21.874: INFO: test-container-pod started at 2021-10-30 03:58:49 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container webserver ready: true, restart count 0
Oct 30 04:00:21.874: INFO: service-proxy-disabled-zrs6q started at 2021-10-30 03:59:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 04:00:21.874: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 04:00:21.874: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 04:00:21.874: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 04:00:21.874: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container collectd ready: true, restart count 0
Oct 30 04:00:21.874: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 04:00:21.874: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 04:00:21.874: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 04:00:21.874: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container discover ready: false, restart count 0
Oct 30 04:00:21.874: INFO: 	Container init ready: false, restart count 0
Oct 30 04:00:21.874: INFO: 	Container install ready: false, restart count 0
Oct 30 04:00:21.874: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 04:00:21.874: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 04:00:21.874: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 04:00:21.874: INFO: service-proxy-disabled-wm9hg started at 2021-10-30 03:59:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:21.874: INFO: 	Container service-proxy-disabled ready: true, restart count 0
W1030 04:00:21.886603      30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 04:00:22.236: INFO: 
Latency metrics for node node1
Oct 30 04:00:22.236: INFO: 
Logging node info for node node2
Oct 30 04:00:22.239: INFO: Node Info: &Node{ObjectMeta:{node2    3b49ad19-ba56-4f4a-b1fa-eef102063de9 151479 0 2021-10-29 21:07:28 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-10-30 01:59:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-10-30 01:59:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:13 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:13 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 04:00:13 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 04:00:13 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 30 04:00:22.240: INFO: 
Logging kubelet events for node node2
Oct 30 04:00:22.243: INFO: 
Logging pods the kubelet thinks are on node node2
Oct 30 04:00:22.264: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:00:22.264: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 04:00:22.264: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 04:00:22.264: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded)
Oct 30 04:00:22.264: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 04:00:22.264: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 04:00:22.264: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.264: INFO: 	Container tas-extender ready: true, restart count 0
Oct 30 04:00:22.264: INFO: service-proxy-disabled-rnj55 started at 2021-10-30 03:59:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.264: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Oct 30 04:00:22.264: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded)
Oct 30 04:00:22.264: INFO: 	Container collectd ready: true, restart count 0
Oct 30 04:00:22.264: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 04:00:22.264: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 04:00:22.264: INFO: up-down-3-zclc7 started at 2021-10-30 03:59:17 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container up-down-3 ready: false, restart count 0
Oct 30 04:00:22.265: INFO: up-down-3-5qbm2 started at 2021-10-30 03:59:17 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container up-down-3 ready: false, restart count 0
Oct 30 04:00:22.265: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 04:00:22.265: INFO: service-proxy-toggled-r5j6m started at 2021-10-30 03:59:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 04:00:22.265: INFO: up-down-2-6422j started at 2021-10-30 03:58:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container up-down-2 ready: false, restart count 0
Oct 30 04:00:22.265: INFO: up-down-2-j2w7w started at 2021-10-30 03:58:00 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container up-down-2 ready: false, restart count 0
Oct 30 04:00:22.265: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 04:00:22.265: INFO: netserver-1 started at 2021-10-30 03:58:27 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container webserver ready: true, restart count 0
Oct 30 04:00:22.265: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Init container install-cni ready: true, restart count 2
Oct 30 04:00:22.265: INFO: 	Container kube-flannel ready: true, restart count 3
Oct 30 04:00:22.265: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 04:00:22.265: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container cmk-webhook ready: true, restart count 0
Oct 30 04:00:22.265: INFO: service-proxy-toggled-h792q started at 2021-10-30 03:59:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 04:00:22.265: INFO: up-down-3-qk7dl started at 2021-10-30 03:59:17 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container up-down-3 ready: false, restart count 0
Oct 30 04:00:22.265: INFO: service-proxy-toggled-lwc7z started at 2021-10-30 03:59:19 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Oct 30 04:00:22.265: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 04:00:22.265: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Oct 30 04:00:22.265: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container discover ready: false, restart count 0
Oct 30 04:00:22.265: INFO: 	Container init ready: false, restart count 0
Oct 30 04:00:22.265: INFO: 	Container install ready: false, restart count 0
Oct 30 04:00:22.265: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded)
Oct 30 04:00:22.265: INFO: 	Container kube-sriovdp ready: true, restart count 0
W1030 04:00:22.278172      30 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 30 04:00:22.579: INFO: 
Latency metrics for node node2
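[Editor's note] The node dumps above (Node Info, kubelet events, per-node pod lists) are the framework's failure diagnostics; the suite reads events and pod lists from the kubelet itself. The sketch below is a simpler approximation that pulls roughly the same picture from the API server with client-go. The kubeconfig path and node name are copied from this log; everything else is an assumption, not the suite's actual code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Node conditions, roughly what "Logging node info for node node1" shows.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}

	// Pods scheduled to the node, as seen by the API server (the suite instead asks
	// the kubelet directly, hence "pods the kubelet thinks are on node node1").
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}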
Oct 30 04:00:22.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7055" for this suite.


• Failure [115.092 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Oct 30 04:00:21.551: failed dialing endpoint, did not find expected responses... 
    Tries 34
    Command curl -g -q -s 'http://10.244.3.193:9080/dial?request=hostname&protocol=udp&host=10.10.190.207&port=31775&tries=1'
    retrieved map[]
    expected map[netserver-0:{} netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":-1,"completed":2,"skipped":511,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for pod-Service: udp"]}
Oct 30 04:00:22.594: INFO: Running AfterSuite actions on all nodes
Oct 30 04:00:22.595: INFO: Running AfterSuite actions on node 1
Oct 30 04:00:22.595: INFO: Skipping dumping logs from cluster



Summarizing 3 Failures:

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Networking Granular Checks: Services [It] should function for pod-Service: udp 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
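[Editor's note] All three failures involve UDP traffic reaching a NodePort (pod-to-Service UDP, conntrack across a cycling backend pod, and a protocol switch on the same port), so a manual check of the raw path can help localize whether the problem sits in kube-proxy/conntrack or in the test pods themselves. A minimal sketch, assuming the node IP and NodePort from the failure above and assuming the backing netserver answers a plain "hostname" request over UDP:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// NodeIP:NodePort taken from the failed dial command above.
	conn, err := net.DialTimeout("udp", "10.10.190.207:31775", 2*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Assumption: the test image's UDP server replies to "hostname" with the pod's hostname.
	if _, err := conn.Write([]byte("hostname")); err != nil {
		panic(err)
	}
	conn.SetReadDeadline(time.Now().Add(2 * time.Second))

	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		fmt.Println("no UDP reply (consistent with the failures above):", err)
		return
	}
	fmt.Println("UDP reply:", string(buf[:n]))
}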

Ran 29 of 5770 Specs in 182.639 seconds
FAIL! -- 26 Passed | 3 Failed | 0 Pending | 5741 Skipped


Ginkgo ran 1 suite in 3m4.309235257s
Test Suite Failed