Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1650669486 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Apr 22 23:18:08.143: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.149: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:08.177: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 23:18:08.252: INFO: The status of Pod cmk-init-discover-node1-7s78z is Succeeded, skipping waiting
Apr 22 23:18:08.252: INFO: The status of Pod cmk-init-discover-node2-2m4dr is Succeeded, skipping waiting
Apr 22 23:18:08.252: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 23:18:08.252: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 22 23:18:08.252: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 22 23:18:08.269: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 22 23:18:08.269: INFO: e2e test version: v1.21.9
Apr 22 23:18:08.270: INFO: kube-apiserver version: v1.21.1
Apr 22 23:18:08.271: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.277: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Apr 22 23:18:08.276: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.297: INFO: Cluster IP family: ipv4
SSS
------------------------------
Apr 22 23:18:08.282: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.304: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Apr 22 23:18:08.290: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.312: INFO: Cluster IP family: ipv4
S
------------------------------
Apr 22 23:18:08.290: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.313: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Apr 22 23:18:08.296: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.319: INFO: Cluster IP family: ipv4
Apr 22 23:18:08.296: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.320: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSS
------------------------------
Apr 22 23:18:08.304: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.327: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Apr 22 23:18:08.314: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.338: INFO: Cluster IP family: ipv4
S
------------------------------
Apr 22 23:18:08.319: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:08.340: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0422 23:18:08.417192 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:08.417: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:08.419: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-699
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-699
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:08.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-699" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•SSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":19,"failed":0}
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W0422 23:18:08.584823 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:08.585: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:08.587: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Apr 22 23:18:08.589: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:08.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-5796" for this suite.
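The NodePort-collision spec above creates two Services that request the same nodePort and expects the apiserver to reject the second until the first is deleted. A minimal pair of manifests to reproduce the conflict by hand might look like the following sketch; the service names, the selector, and the port value 30080 are illustrative assumptions, not values taken from the test:

```yaml
# Hypothetical first Service: claims nodePort 30080.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-collision-1
spec:
  type: NodePort
  selector:
    app: collision-demo   # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
---
# Hypothetical second Service: asks for the same nodePort. Creating it
# while the first Service exists should fail with a "provided port is
# already allocated" error; after deleting the first, it succeeds.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-collision-2
spec:
  type: NodePort
  selector:
    app: collision-demo
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
```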
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866
S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
W0422 23:18:08.724067 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:08.724: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:08.726: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Apr 22 23:18:08.728: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:08.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-1836" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0422 23:18:08.395031 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:08.395: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:08.397: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Apr 22 23:18:08.415: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:10.420: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:12.418: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:14.419: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Apr 22 23:18:14.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5688 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Apr 22 23:18:14.883: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Apr 22 23:18:14.883: INFO: stdout: "iptables"
Apr 22 23:18:14.883: INFO: proxyMode: iptables
Apr 22 23:18:14.890: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 22 23:18:14.892: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-5688
Apr 22 23:18:14.898: INFO: sourceip-test cluster ip: 10.233.47.128
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Apr 22 23:18:14.916: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:16.920: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:18.924: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:20.927: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-5688 to expose endpoints map[echo-sourceip:[8080]]
Apr 22 23:18:20.935: INFO: successfully validated that service sourceip-test in namespace services-5688 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Apr 22 23:18:20.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Apr 22 23:18:22.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266300, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266300, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266300, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-7d4ff5fc8b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 22 23:18:24.956: INFO: Waiting up to 2m0s to get response from 10.233.47.128:8080
Apr 22 23:18:24.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5688 exec pause-pod-7d4ff5fc8b-ndhd5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.47.128:8080/clientip'
Apr 22 23:18:25.226: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.47.128:8080/clientip\n"
Apr 22 23:18:25.226: INFO: stdout: "10.244.4.79:40018"
STEP: Verifying the preserved source ip
Apr 22 23:18:25.227: INFO: Waiting up to 2m0s to get response from 10.233.47.128:8080
Apr 22 23:18:25.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5688 exec pause-pod-7d4ff5fc8b-xmpm7 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.47.128:8080/clientip'
Apr 22 23:18:25.479: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.47.128:8080/clientip\n"
Apr 22 23:18:25.479: INFO: stdout: "10.244.3.91:57878"
STEP: Verifying the preserved source ip
Apr 22 23:18:25.479: INFO: Deleting deployment
Apr 22 23:18:25.484: INFO: Cleaning up the echo server pod
Apr 22 23:18:25.492: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:25.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5688" for this suite.
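The verification in the spec above boils down to: fetch `/clientip` through the Service's cluster IP from a client pod, then check that the address in the response matches that pod's own IP. A minimal sketch of just the comparison step, with the response string and pod IP hard-coded from this log (in a real run both would come from `kubectl`, e.g. the pod's `status.podIP`):

```shell
#!/bin/sh
# Response body returned by the /clientip endpoint, as seen in the log above.
clientip_response="10.244.4.79:40018"
# IP of the pause pod that issued the request (hypothetical lookup; the
# e2e framework reads it from the pod's status.podIP).
pause_pod_ip="10.244.4.79"

# Strip the ephemeral source port and compare only the address part.
source_ip="${clientip_response%:*}"
if [ "$source_ip" = "$pause_pod_ip" ]; then
  echo "source IP preserved"
else
  echo "source IP NOT preserved: got $source_ip, want $pause_pod_ip"
fi
```

With kube-proxy in iptables mode and in-cluster ClusterIP traffic, no SNAT is applied on this path, which is why the test expects the two addresses to match.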
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:17.148 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":1,"skipped":16,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0422 23:18:08.356379 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:08.356: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:08.359: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-1435
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:08.469: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:08.501: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:10.505: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:12.508: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:14.508: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:16.505: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:18.506: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:20.509: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:22.508: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:24.510: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:26.505: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:28.506: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:30.509: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:18:30.516: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:18:34.543: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:18:34.544: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:34.552: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:34.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1435" for this suite.
S [SKIPPING] [26.234 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:34.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Apr 22 23:18:34.866: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:34.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-9089" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0422 23:18:08.722829 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:08.723: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:08.725: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-2053
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:08.846: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:08.877: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:10.882: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:12.887: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:14.882: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:16.882: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:18.881: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:20.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:22.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:24.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:26.882: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:28.881: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:30.882: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:18:30.887: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:18:36.909: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:18:36.909: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:36.916: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:36.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2053" for this suite.
S [SKIPPING] [28.230 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-2189
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:08.904: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:08.935: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:10.938: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:12.940: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:14.939: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:16.939: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:18.938: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:20.939: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:22.942: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:24.940: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:26.939: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:28.940: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:30.940: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:18:30.945: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:18:36.966: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:18:36.966: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:36.973: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:36.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2189" for this suite.
S [SKIPPING] [28.209 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:37.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Apr 22 23:18:37.243: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/:
anaconda/
audit/
boot.log
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-448
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:09.719: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:09.751: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:11.756: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:13.755: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:15.757: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:17.756: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:19.761: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:21.755: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:23.757: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:25.755: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:27.757: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:29.757: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:18:29.763: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 22 23:18:31.766: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:18:37.805: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:18:37.805: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:37.811: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:37.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-448" for this suite.


S [SKIPPING] [28.224 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:09.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W0422 23:18:09.325886      22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:09.326: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:09.327: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-3505
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:09.460: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:09.491: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:11.495: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:13.495: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:15.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:17.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:19.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:21.494: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:23.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:25.496: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:27.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:29.495: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:31.495: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:18:31.501: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:18:37.522: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:18:37.522: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Apr 22 23:18:37.541: INFO: Service node-port-service in namespace nettest-3505 found.
Apr 22 23:18:37.556: INFO: Service session-affinity-service in namespace nettest-3505 found.
STEP: Waiting for NodePort service to expose endpoint
Apr 22 23:18:38.558: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Apr 22 23:18:39.562: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.233.25.235:80 (config.clusterIP)
Apr 22 23:18:39.567: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.84:9080/dial?request=echo?msg=424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424
24242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242&protocol=http&host=10.233.25.235&port=80&tries=1'] Namespace:nettest-3505 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:18:39.567: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:39.776: INFO: Waiting for responses: map[]
Apr 22 23:18:39.777: INFO: reached 10.233.25.235 after 0/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:39.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3505" for this suite.


• [SLOW TEST:30.482 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","total":-1,"completed":1,"skipped":438,"failed":0}
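[Annotation] The dial above hits agnhost's `/dial` helper on the test-container pod, which forwards the `msg` payload (a long run of "42") to the target service and echoes the responses back. A minimal sketch of building such a probe URL — function name and URL encoding are illustrative, not the e2e framework's code:

```python
from urllib.parse import urlencode

def dial_url(probe_ip: str, probe_port: int, target_host: str,
             target_port: int, msg: str, protocol: str = "http",
             tries: int = 1) -> str:
    """Build an agnhost-style /dial probe URL like the one in the log above.

    The container at probe_ip:probe_port forwards `msg` to
    target_host:target_port over `protocol` and reports the replies.
    """
    query = urlencode({
        "request": f"echo?msg={msg}",   # inner request the prober relays
        "protocol": protocol,
        "host": target_host,
        "port": target_port,
        "tries": tries,
    })
    return f"http://{probe_ip}:{probe_port}/dial?{query}"

# A large payload, as in the "handle large requests" test: ~1900 bytes of "42".
payload = "42" * 950
url = dial_url("10.244.4.84", 9080, "10.233.25.235", 80, payload)
```

The logged curl passes the query string unencoded; `urlencode` here is just a convenient, equivalent way to assemble it.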

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:40.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Apr 22 23:18:40.095: INFO: Creating new exec pod
Apr 22 23:18:46.115: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Apr 22 23:18:46.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2672 exec execpod-noendpointsldld8 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Apr 22 23:18:47.393: INFO: rc: 1
Apr 22 23:18:47.393: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2672 exec execpod-noendpointsldld8 -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:47.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2672" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:7.337 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":2,"skipped":588,"failed":0}
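[Annotation] The REFUSED the test expects above comes from kube-proxy rejecting traffic to a Service with no endpoints, so the client sees an active connection refusal rather than a timeout. A small sketch of the same distinction with a plain socket — illustrative, not the `agnhost connect` implementation:

```python
import errno
import socket

def connect_is_refused(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True when a TCP connect is actively refused (RST / reject),
    which is what `agnhost connect` reports as REFUSED in the log above."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        err = s.connect_ex((host, port))  # returns an errno instead of raising
        return err == errno.ECONNREFUSED

# Find a port with no listener by binding and releasing one, then probe it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()
print(connect_is_refused("127.0.0.1", free_port))  # → True (no listener)
```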

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:34.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Apr 22 23:18:34.934: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-315ea907-d302-4446-9020-0d4231754ce8  dns-9566  615857ec-43bf-46fc-83e4-ad5c1536fd40 72136 0 2022-04-22 23:18:34 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-04-22 23:18:34 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-7g6kf,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-p8gns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,Por
tworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-p8gns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:n
il,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 22 23:18:38.944: INFO: testServerIP is 10.244.3.95
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Apr 22 23:18:38.953: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-9566  e83ec95b-96a0-4520-ae8f-b01bef403e9e 72258 0 2022-04-22 23:18:38 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2022-04-22 23:18:38 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nshx7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Conta
iner{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nshx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.3.95],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,V
alue:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Apr 22 23:18:46.961: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-9566 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:18:46.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Apr 22 23:18:47.324: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-9566 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:18:47.324: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:18:47.478: INFO: Deleting pod e2e-dns-utils...
Apr 22 23:18:47.483: INFO: Deleting pod e2e-configmap-dns-server-315ea907-d302-4446-9020-0d4231754ce8...
Apr 22 23:18:47.489: INFO: Deleting configmap e2e-coredns-configmap-7g6kf...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:47.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9566" for this suite.


• [SLOW TEST:12.599 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":1,"skipped":157,"failed":0}
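[Annotation] With `dnsPolicy: None` and a custom `dnsConfig`, as in the pod above (nameserver 10.244.3.95, search `resolv.conf.local`, `ndots:2`), the kubelet writes the pod's /etc/resolv.conf entirely from that config. A sketch of that rendering — illustrative only, not the kubelet's actual writer:

```python
def render_resolv_conf(nameservers, searches, options):
    """Render a pod dnsConfig (as with dnsPolicy=None above) into the
    /etc/resolv.conf text the container would see."""
    lines = [f"nameserver {ns}" for ns in nameservers]
    if searches:
        lines.append("search " + " ".join(searches))
    if options:
        # options are (name, value) pairs; value None means a bare flag
        opts = [f"{name}:{val}" if val is not None else name
                for name, val in options]
        lines.append("options " + " ".join(opts))
    return "\n".join(lines) + "\n"

print(render_resolv_conf(["10.244.3.95"], ["resolv.conf.local"],
                         [("ndots", 2)]))
```

This is what the test's `cat /etc/resolv.conf` step verifies before exercising the injected name with `dig +search`.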

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:47.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 22 23:18:47.538: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 22 23:18:47.541: INFO: starting watch
STEP: patching
STEP: updating
Apr 22 23:18:47.548: INFO: waiting for watch events with expected annotations
Apr 22 23:18:47.548: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Apr 22 23:18:47.548: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:47.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-388" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":634,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:47.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Apr 22 23:18:47.737: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:47.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-933" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
W0422 23:18:08.805510      33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:08.805: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:08.807: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-8211
STEP: creating a client pod for probing the service svc-udp
Apr 22 23:18:08.835: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:10.841: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:12.840: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:14.839: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:16.839: INFO: The status of Pod pod-client is Running (Ready = true)
Apr 22 23:18:16.848: INFO: Pod client logs: Fri Apr 22 23:18:15 UTC 2022
Fri Apr 22 23:18:15 UTC 2022 Try: 1

Fri Apr 22 23:18:15 UTC 2022 Try: 2

Fri Apr 22 23:18:15 UTC 2022 Try: 3

Fri Apr 22 23:18:15 UTC 2022 Try: 4

Fri Apr 22 23:18:15 UTC 2022 Try: 5

Fri Apr 22 23:18:15 UTC 2022 Try: 6

Fri Apr 22 23:18:15 UTC 2022 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Apr 22 23:18:16.860: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:18.866: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:20.870: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:22.868: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:24.869: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8211 to expose endpoints map[pod-server-1:[80]]
Apr 22 23:18:24.880: INFO: successfully validated that service svc-udp in namespace conntrack-8211 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Apr 22 23:18:34.906: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:36.910: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:38.910: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Apr 22 23:18:38.912: INFO: Cleaning up pod-server-1 pod
Apr 22 23:18:38.919: INFO: Waiting for pod pod-server-1 to disappear
Apr 22 23:18:38.922: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8211 to expose endpoints map[pod-server-2:[80]]
Apr 22 23:18:38.929: INFO: successfully validated that service svc-udp in namespace conntrack-8211 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:48.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-8211" for this suite.


• [SLOW TEST:40.166 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":1,"skipped":142,"failed":0}
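The pod-client logs above come from a timestamped probe loop that fires UDP datagrams at svc-udp ("Try: 1", "Try: 2", …); the spec then verifies traffic still reaches the service after pod-server-1 is replaced by pod-server-2. A minimal localhost sketch of that probe/echo pattern, with plain sockets standing in for the client pod, the backend pod, and the ClusterIP (assumptions, not the test's actual wiring):

```python
import socket

# Stand-in for the backend pod: a UDP echo server on an ephemeral loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

# Stand-in for pod-client: send numbered probes and read the echoes back,
# mirroring the "Try: N" loop visible in the pod logs.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
received = []
for try_no in range(1, 4):
    client.sendto(f"Try: {try_no}".encode(), addr)
    payload, client_addr = server.recvfrom(1024)  # backend sees the probe
    server.sendto(payload, client_addr)           # and echoes it
    reply, _ = client.recvfrom(1024)
    received.append(reply.decode())

print(received)  # -> ['Try: 1', 'Try: 2', 'Try: 3']
```

In the real spec the interesting part is what this sketch omits: kube-proxy's conntrack entries for the UDP flow must be flushed when the backend pod changes, or the client's datagrams keep being forwarded to the deleted pod.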

------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:25.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-7319
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:25.662: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:25.692: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:27.696: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:29.696: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:31.695: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:33.697: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:35.697: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:37.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:39.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:41.695: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:43.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:45.697: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:47.695: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:18:47.700: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:18:55.736: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:18:55.736: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:55.744: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:55.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7319" for this suite.


S [SKIPPING] [30.222 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138

  Requires at least 2 nodes (not -1)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:47.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-1996
Apr 22 23:18:47.954: INFO: hairpin-test cluster ip: 10.233.27.108
STEP: creating a client/server pod
Apr 22 23:18:47.967: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:49.970: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:51.970: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:53.969: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:55.974: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:57.971: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-1996 to expose endpoints map[hairpin:[8080]]
Apr 22 23:18:57.981: INFO: successfully validated that service hairpin-test in namespace services-1996 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Apr 22 23:18:58.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1996 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Apr 22 23:18:59.248: INFO: stderr: "+ nc -v -t -w 2 hairpin-test 8080\n+ echo hostName\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Apr 22 23:18:59.248: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 22 23:18:59.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1996 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.27.108 8080'
Apr 22 23:18:59.505: INFO: stderr: "+ nc -v -t -w 2 10.233.27.108 8080\nConnection to 10.233.27.108 8080 port [tcp/http-alt] succeeded!\n+ echo hostName\n"
Apr 22 23:18:59.505: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:59.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1996" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:11.584 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":2,"skipped":362,"failed":0}
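The hairpin check above has the pod connect back to itself through its own service (`echo hostName | nc -v -t -w 2 hairpin-test 8080`); the `400 Bad Request` stdout is expected, since the probe sends a bare string to an HTTP server and only connectivity matters. A loopback sketch of the same idea, one process connecting back to its own listening socket (port and payload are local stand-ins; no kube-proxy or service VIP involved):

```python
import socket
import threading

# Stand-in "server side" of the hairpin pod: accept one connection and echo a reply.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

def serve():
    conn, _ = listener.accept()
    data = conn.recv(1024)
    conn.sendall(b"echo:" + data)
    conn.close()

t = threading.Thread(target=serve)
t.start()

# The same process connects back to its own listener, like the hairpin probe.
client = socket.create_connection(("127.0.0.1", port), timeout=2)
client.sendall(b"hostName")
reply = client.recv(1024)
client.close()
t.join()
print(reply.decode())  # -> echo:hostName
```

What the e2e spec actually exercises is the hard part this sketch skips: with a service in between, the packet leaves the pod toward the ClusterIP and must be NATed back to the same pod (hairpin NAT), which fails on misconfigured bridges.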

------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:59.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Apr 22 23:18:59.556: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:59.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-5186" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:59.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:18:59.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4853" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":3,"skipped":387,"failed":0}

------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:38.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-3937
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:38.330: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:38.359: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:40.363: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:42.364: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:44.364: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:46.363: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:48.363: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:50.364: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:52.364: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:54.364: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:56.363: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:58.363: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:00.364: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:19:00.369: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 22 23:19:02.374: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 22 23:19:04.375: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:19:08.448: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:19:08.448: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:08.455: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:08.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3937" for this suite.


S [SKIPPING] [30.241 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:37.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-7248
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:37.479: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:37.509: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:39.513: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:41.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:43.512: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:45.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:47.511: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:49.512: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:51.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:53.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:55.515: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:57.512: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:59.512: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:18:59.517: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:19:09.554: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:19:09.554: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:09.563: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:09.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7248" for this suite.


S [SKIPPING] [32.197 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:37.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-5740
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:37.115: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:37.147: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:39.151: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:41.154: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:43.152: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:45.152: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:47.153: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:49.153: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:51.152: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:53.151: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:55.152: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:57.151: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:59.151: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:18:59.156: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 22 23:19:01.161: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 22 23:19:03.161: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:19:11.184: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:19:11.184: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:11.191: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:11.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5740" for this suite.


S [SKIPPING] [34.198 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:55.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-2616
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:55.998: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:56.031: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:58.034: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:00.037: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:02.036: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:04.035: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:06.035: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:08.034: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:10.036: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:12.034: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:14.034: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:16.036: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:18.033: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:19:18.038: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:19:22.073: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:19:22.073: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:22.079: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:22.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2616" for this suite.


S [SKIPPING] [26.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:22.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-9259
STEP: deleting original service nodeport-reuse
Apr 22 23:19:22.169: INFO: Creating new host exec pod
Apr 22 23:19:22.182: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:24.185: INFO: The status of Pod hostexec is Running (Ready = true)
Apr 22 23:19:24.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9259 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :31730' | tail -n +2 | grep LISTEN'
Apr 22 23:19:24.440: INFO: stderr: "+ tail -n +2\n+ grep LISTEN\n+ ss -ant46 'sport = :31730'\n"
Apr 22 23:19:24.440: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 31730
STEP: deleting service nodeport-reuse in namespace services-9259
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:24.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9259" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":2,"skipped":106,"failed":0}
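The release check above runs `! ss -ant46 'sport = :31730' | tail -n +2 | grep LISTEN` in a hostexec pod, asserting nothing is left listening on the freed NodePort before a new service reuses it. An equivalent local check sketched in Python (31730 is the port from this log; any port behaves the same):

```python
import socket

def port_is_free(port: int) -> bool:
    """Return True if nothing accepts TCP connections on the port (loopback check)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when a listener accepts, an errno otherwise.
        return s.connect_ex(("127.0.0.1", port)) != 0

print(port_is_free(31730))  # True on a host with no listener on 31730
```

Note the e2e check inspects the node's socket table, not just loopback reachability; the sketch above only approximates it from the connecting side.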

------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
W0422 23:18:08.559725      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:08.559: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:08.561: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-6070
STEP: creating a client pod for probing the service svc-udp
Apr 22 23:18:08.590: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:10.594: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:12.593: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:14.594: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:16.593: INFO: The status of Pod pod-client is Running (Ready = true)
Apr 22 23:18:16.602: INFO: Pod client logs: Fri Apr 22 23:18:13 UTC 2022
Fri Apr 22 23:18:13 UTC 2022 Try: 1

Fri Apr 22 23:18:13 UTC 2022 Try: 2

Fri Apr 22 23:18:13 UTC 2022 Try: 3

Fri Apr 22 23:18:13 UTC 2022 Try: 4

Fri Apr 22 23:18:13 UTC 2022 Try: 5

Fri Apr 22 23:18:13 UTC 2022 Try: 6

Fri Apr 22 23:18:13 UTC 2022 Try: 7

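The "Try: N" lines in the client log above come from a simple UDP probe loop. Below is a hedged sketch of that kind of loop; `NODE_IP`, `NODE_PORT`, and the `nc` invocation are illustrative assumptions, not the exact agnhost client code (the NodePort itself is not shown in the log, so it is left as a required variable).

```shell
#!/bin/sh
# Hedged sketch of the kind of UDP probe loop that produces the "Try: N"
# lines in the client log above. NODE_IP, NODE_PORT, and the nc invocation
# are illustrative assumptions, not the exact agnhost client code.

probe_until() {
  # probe_until <command> <max-attempts>: retry <command> until it
  # succeeds or the attempt budget is exhausted.
  attempt=1
  while [ "$attempt" -le "$2" ]; do
    echo "$(date -u) Try: $attempt"
    if "$1"; then
      return 0
    fi
    attempt=$((attempt + 1))
  done
  return 1
}

NODE_IP="${NODE_IP:-10.10.190.208}"   # backend node IP from the test output

send_udp() {
  # -u: UDP, -w 1: one-second timeout; the agnhost server is expected to
  # echo its hostname (pod-server-1) back to the client.
  echo hostname | nc -u -w 1 "$NODE_IP" "$NODE_PORT" | grep -q pod-server-1
}

# NODE_PORT is not shown in the log; set it before running the live probe.
if [ -n "${NODE_PORT:-}" ]; then
  probe_until send_udp 30
fi
```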
STEP: creating a backend pod pod-server-1 for the service svc-udp
Apr 22 23:18:16.615: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:18.618: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:20.620: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:22.621: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:24.621: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-6070 to expose endpoints map[pod-server-1:[80]]
Apr 22 23:18:24.633: INFO: successfully validated that service svc-udp in namespace conntrack-6070 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Apr 22 23:19:24.663: INFO: Pod client logs: Fri Apr 22 23:18:13 UTC 2022
Fri Apr 22 23:18:13 UTC 2022 Try: 1

Fri Apr 22 23:18:13 UTC 2022 Try: 2

Fri Apr 22 23:18:13 UTC 2022 Try: 3

Fri Apr 22 23:18:13 UTC 2022 Try: 4

Fri Apr 22 23:18:13 UTC 2022 Try: 5

Fri Apr 22 23:18:13 UTC 2022 Try: 6

Fri Apr 22 23:18:13 UTC 2022 Try: 7

Fri Apr 22 23:18:18 UTC 2022 Try: 8

Fri Apr 22 23:18:18 UTC 2022 Try: 9

Fri Apr 22 23:18:18 UTC 2022 Try: 10

Fri Apr 22 23:18:18 UTC 2022 Try: 11

Fri Apr 22 23:18:18 UTC 2022 Try: 12

Fri Apr 22 23:18:18 UTC 2022 Try: 13

Fri Apr 22 23:18:23 UTC 2022 Try: 14

Fri Apr 22 23:18:23 UTC 2022 Try: 15

Fri Apr 22 23:18:23 UTC 2022 Try: 16

Fri Apr 22 23:18:23 UTC 2022 Try: 17

Fri Apr 22 23:18:23 UTC 2022 Try: 18

Fri Apr 22 23:18:23 UTC 2022 Try: 19

Fri Apr 22 23:18:28 UTC 2022 Try: 20

Fri Apr 22 23:18:28 UTC 2022 Try: 21

Fri Apr 22 23:18:28 UTC 2022 Try: 22

Fri Apr 22 23:18:28 UTC 2022 Try: 23

Fri Apr 22 23:18:28 UTC 2022 Try: 24

Fri Apr 22 23:18:28 UTC 2022 Try: 25

Fri Apr 22 23:18:33 UTC 2022 Try: 26

Fri Apr 22 23:18:33 UTC 2022 Try: 27

Fri Apr 22 23:18:33 UTC 2022 Try: 28

Fri Apr 22 23:18:33 UTC 2022 Try: 29

Fri Apr 22 23:18:33 UTC 2022 Try: 30

Fri Apr 22 23:18:33 UTC 2022 Try: 31

Fri Apr 22 23:18:38 UTC 2022 Try: 32

Fri Apr 22 23:18:38 UTC 2022 Try: 33

Fri Apr 22 23:18:38 UTC 2022 Try: 34

Fri Apr 22 23:18:38 UTC 2022 Try: 35

Fri Apr 22 23:18:38 UTC 2022 Try: 36

Fri Apr 22 23:18:38 UTC 2022 Try: 37

Fri Apr 22 23:18:43 UTC 2022 Try: 38

Fri Apr 22 23:18:43 UTC 2022 Try: 39

Fri Apr 22 23:18:43 UTC 2022 Try: 40

Fri Apr 22 23:18:43 UTC 2022 Try: 41

Fri Apr 22 23:18:43 UTC 2022 Try: 42

Fri Apr 22 23:18:43 UTC 2022 Try: 43

Fri Apr 22 23:18:48 UTC 2022 Try: 44

Fri Apr 22 23:18:48 UTC 2022 Try: 45

Fri Apr 22 23:18:48 UTC 2022 Try: 46

Fri Apr 22 23:18:48 UTC 2022 Try: 47

Fri Apr 22 23:18:48 UTC 2022 Try: 48

Fri Apr 22 23:18:48 UTC 2022 Try: 49

Fri Apr 22 23:18:53 UTC 2022 Try: 50

Fri Apr 22 23:18:53 UTC 2022 Try: 51

Fri Apr 22 23:18:53 UTC 2022 Try: 52

Fri Apr 22 23:18:53 UTC 2022 Try: 53

Fri Apr 22 23:18:53 UTC 2022 Try: 54

Fri Apr 22 23:18:53 UTC 2022 Try: 55

Fri Apr 22 23:18:58 UTC 2022 Try: 56

Fri Apr 22 23:18:58 UTC 2022 Try: 57

Fri Apr 22 23:18:58 UTC 2022 Try: 58

Fri Apr 22 23:18:58 UTC 2022 Try: 59

Fri Apr 22 23:18:58 UTC 2022 Try: 60

Fri Apr 22 23:18:58 UTC 2022 Try: 61

Fri Apr 22 23:19:03 UTC 2022 Try: 62

Fri Apr 22 23:19:03 UTC 2022 Try: 63

Fri Apr 22 23:19:03 UTC 2022 Try: 64

Fri Apr 22 23:19:03 UTC 2022 Try: 65

Fri Apr 22 23:19:03 UTC 2022 Try: 66

Fri Apr 22 23:19:03 UTC 2022 Try: 67

Fri Apr 22 23:19:08 UTC 2022 Try: 68

Fri Apr 22 23:19:08 UTC 2022 Try: 69

Fri Apr 22 23:19:08 UTC 2022 Try: 70

Fri Apr 22 23:19:08 UTC 2022 Try: 71

Fri Apr 22 23:19:08 UTC 2022 Try: 72

Fri Apr 22 23:19:08 UTC 2022 Try: 73

Fri Apr 22 23:19:13 UTC 2022 Try: 74

Fri Apr 22 23:19:13 UTC 2022 Try: 75

Fri Apr 22 23:19:13 UTC 2022 Try: 76

Fri Apr 22 23:19:13 UTC 2022 Try: 77

Fri Apr 22 23:19:13 UTC 2022 Try: 78

Fri Apr 22 23:19:13 UTC 2022 Try: 79

Fri Apr 22 23:19:18 UTC 2022 Try: 80

Fri Apr 22 23:19:18 UTC 2022 Try: 81

Fri Apr 22 23:19:18 UTC 2022 Try: 82

Fri Apr 22 23:19:18 UTC 2022 Try: 83

Fri Apr 22 23:19:18 UTC 2022 Try: 84

Fri Apr 22 23:19:18 UTC 2022 Try: 85

Fri Apr 22 23:19:23 UTC 2022 Try: 86

Fri Apr 22 23:19:23 UTC 2022 Try: 87

Fri Apr 22 23:19:23 UTC 2022 Try: 88

Fri Apr 22 23:19:23 UTC 2022 Try: 89

Fri Apr 22 23:19:23 UTC 2022 Try: 90

Fri Apr 22 23:19:23 UTC 2022 Try: 91

Apr 22 23:19:24.663: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a00480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a00480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001a00480, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
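When this conntrack test fails, a classic cause of "client never reaches the UDP backend" is a stale conntrack entry that keeps steering the client's 5-tuple toward a backend that no longer answers. A hedged diagnostic sketch for the node side follows; the sample `conntrack` invocations and the NodePort placeholder are illustrative assumptions, not commands taken from this run.

```shell
#!/bin/sh
# Hedged diagnostic sketch: count stale (UNREPLIED) UDP conntrack entries
# for a given destination port. The conntrack invocations below are
# illustrative; substitute the real svc-udp NodePort.

count_unreplied_udp() {
  # count_unreplied_udp <dport>: count UNREPLIED UDP entries for the given
  # destination port in `conntrack -L` output read from stdin.
  grep -c "^udp .*dport=$1 .*\[UNREPLIED\]" || true
}

# On the node (requires root and the conntrack CLI), something like:
#   conntrack -L -p udp | count_unreplied_udp <svc-udp NodePort>
# and, to flush suspect entries for that port:
#   conntrack -D -p udp --dport <svc-udp NodePort>
```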
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-6070".
STEP: Found 8 events.
Apr 22 23:19:24.669: INFO: At 2022-04-22 23:18:10 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 23:19:24.669: INFO: At 2022-04-22 23:18:11 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 335.53899ms
Apr 22 23:19:24.669: INFO: At 2022-04-22 23:18:11 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Apr 22 23:19:24.669: INFO: At 2022-04-22 23:18:13 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Apr 22 23:19:24.669: INFO: At 2022-04-22 23:18:18 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 23:19:24.669: INFO: At 2022-04-22 23:18:19 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 320.336751ms
Apr 22 23:19:24.669: INFO: At 2022-04-22 23:18:19 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Apr 22 23:19:24.669: INFO: At 2022-04-22 23:18:20 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Apr 22 23:19:24.672: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Apr 22 23:19:24.672: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:08 +0000 UTC  }]
Apr 22 23:19:24.672: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:16 +0000 UTC  }]
Apr 22 23:19:24.672: INFO: 
Apr 22 23:19:24.677: INFO: 
Logging node info for node master1
Apr 22 23:19:24.679: INFO: Node Info: &Node{ObjectMeta:{master1    70710064-7222-41b1-b51e-81deaa6e7014 73445 0 2022-04-22 19:56:45 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:23 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:23 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:23 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:19:23 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:19:24.680: INFO: 
Logging kubelet events for node master1
Apr 22 23:19:24.683: INFO: 
Logging pods the kubelet thinks are on node master1
Apr 22 23:19:24.711: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Container kube-scheduler ready: true, restart count 0
Apr 22 23:19:24.711: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 22 23:19:24.711: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Container kube-controller-manager ready: true, restart count 2
Apr 22 23:19:24.711: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:19:24.711: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:19:24.711: INFO: 	Container prometheus-operator ready: true, restart count 0
Apr 22 23:19:24.711: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Container docker-registry ready: true, restart count 0
Apr 22 23:19:24.711: INFO: 	Container nginx ready: true, restart count 0
Apr 22 23:19:24.711: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:19:24.711: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:19:24.711: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:19:24.711: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:19:24.711: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 22 23:19:24.711: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.711: INFO: 	Container autoscaler ready: true, restart count 2
Apr 22 23:19:24.803: INFO: 
Latency metrics for node master1
Apr 22 23:19:24.803: INFO: 
Logging node info for node master2
Apr 22 23:19:24.806: INFO: Node Info: &Node{ObjectMeta:{master2    4a346a45-ed0b-49d9-a2ad-b419d2c4705c 73365 0 2022-04-22 19:57:16 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 20:08:32 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:19:19 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:19:24.807: INFO: 
Logging kubelet events for node master2
Apr 22 23:19:24.809: INFO: 
Logging pods the kubelet thinks are on node master2
Apr 22 23:19:24.824: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:19:24.824: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:19:24.824: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:19:24.824: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.824: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 22 23:19:24.824: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.824: INFO: 	Container kube-controller-manager ready: true, restart count 2
Apr 22 23:19:24.824: INFO: kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.824: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:19:24.824: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.824: INFO: 	Container nfd-controller ready: true, restart count 0
Apr 22 23:19:24.824: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.824: INFO: 	Container kube-scheduler ready: true, restart count 1
Apr 22 23:19:24.824: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:19:24.824: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:19:24.824: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 22 23:19:24.824: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.824: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:19:24.824: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.824: INFO: 	Container coredns ready: true, restart count 1
Apr 22 23:19:24.921: INFO: 
Latency metrics for node master2
Apr 22 23:19:24.921: INFO: 
Logging node info for node master3
Apr 22 23:19:24.924: INFO: Node Info: &Node{ObjectMeta:{master3    43c25e47-7b5c-4cf0-863e-39d16b72dcb3 73369 0 2022-04-22 19:57:26 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:19:19 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:19:24.924: INFO: 
Logging kubelet events for node master3
Apr 22 23:19:24.927: INFO: 
Logging pods the kubelet thinks are on node master3
Apr 22 23:19:24.941: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.941: INFO: 	Container kube-proxy ready: true, restart count 1
Apr 22 23:19:24.941: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:19:24.941: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:19:24.941: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 22 23:19:24.941: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.941: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:19:24.941: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.941: INFO: 	Container coredns ready: true, restart count 1
Apr 22 23:19:24.941: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:19:24.941: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:19:24.941: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:19:24.941: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.941: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 22 23:19:24.941: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.941: INFO: 	Container kube-controller-manager ready: true, restart count 3
Apr 22 23:19:24.941: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:24.941: INFO: 	Container kube-scheduler ready: true, restart count 2
Apr 22 23:19:25.032: INFO: 
Latency metrics for node master3
Apr 22 23:19:25.032: INFO: 
Logging node info for node node1
Apr 22 23:19:25.034: INFO: Node Info: &Node{ObjectMeta:{node1    e0ec3d42-4e2e-47e3-b369-98011b25b39b 73441 0 2022-04-22 19:58:33 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 
19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-04-22 22:25:16 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2022-04-22 22:25:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 
+0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:23 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:23 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:23 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:19:23 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:19:25.035: INFO: 
Logging kubelet events for node node1
Apr 22 23:19:25.038: INFO: 
Logging pods the kubelet thinks are on node node1
Apr 22 23:19:25.056: INFO: test-container-pod started at 2022-04-22 23:19:18 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:19:25.056: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Init container install-cni ready: true, restart count 2
Apr 22 23:19:25.056: INFO: 	Container kube-flannel ready: true, restart count 3
Apr 22 23:19:25.056: INFO: host-test-container-pod started at 2022-04-22 23:19:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:19:25.056: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 20:16:40 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container tas-extender ready: true, restart count 0
Apr 22 23:19:25.056: INFO: verify-service-up-exec-pod-x862v started at 2022-04-22 23:19:18 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:19:25.056: INFO: netserver-0 started at 2022-04-22 23:19:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:19:25.056: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container discover ready: false, restart count 0
Apr 22 23:19:25.056: INFO: 	Container init ready: false, restart count 0
Apr 22 23:19:25.056: INFO: 	Container install ready: false, restart count 0
Apr 22 23:19:25.056: INFO: verify-service-down-host-exec-pod started at 2022-04-22 23:19:22 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:19:25.056: INFO: pod-client started at 2022-04-22 23:18:08 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container pod-client ready: true, restart count 0
Apr 22 23:19:25.056: INFO: netserver-0 started at 2022-04-22 23:18:56 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:19:25.056: INFO: netserver-0 started at 2022-04-22 23:18:48 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:19:25.056: INFO: verify-service-up-host-exec-pod started at 2022-04-22 23:19:12 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:19:25.056: INFO: startup-script started at 2022-04-22 23:19:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container startup-script ready: true, restart count 0
Apr 22 23:19:25.056: INFO: slow-terminating-unready-pod-fvvt8 started at 2022-04-22 23:19:24 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container slow-terminating-unready-pod ready: false, restart count 0
Apr 22 23:19:25.056: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 22 23:19:25.056: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:19:25.056: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:19:25.056: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:19:25.056: INFO: execpodgxp9r started at 2022-04-22 23:18:20 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:19:25.056: INFO: test-container-pod started at 2022-04-22 23:19:04 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:19:25.056: INFO: netserver-0 started at 2022-04-22 23:19:09 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:19:25.056: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container nodereport ready: true, restart count 0
Apr 22 23:19:25.056: INFO: 	Container reconcile ready: true, restart count 0
Apr 22 23:19:25.056: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 22 23:19:25.056: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container config-reloader ready: true, restart count 0
Apr 22 23:19:25.056: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:19:25.056: INFO: 	Container grafana ready: true, restart count 0
Apr 22 23:19:25.056: INFO: 	Container prometheus ready: true, restart count 1
Apr 22 23:19:25.056: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container collectd ready: true, restart count 0
Apr 22 23:19:25.056: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 22 23:19:25.056: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 22 23:19:25.056: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container nginx-proxy ready: true, restart count 2
Apr 22 23:19:25.056: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:19:25.056: INFO: service-proxy-toggled-qffbg started at 2022-04-22 23:18:59 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Apr 22 23:19:25.056: INFO: netserver-0 started at 2022-04-22 23:19:08 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:19:25.056: INFO: up-down-2-lt6q4 started at 2022-04-22 23:18:17 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container up-down-2 ready: true, restart count 0
Apr 22 23:19:25.056: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:19:25.056: INFO: service-proxy-disabled-bfqj7 started at 2022-04-22 23:18:50 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.056: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Apr 22 23:19:25.773: INFO: 
Latency metrics for node node1
Apr 22 23:19:25.773: INFO: 
Logging node info for node node2
Apr 22 23:19:25.776: INFO: Node Info: &Node{ObjectMeta:{node2    ef89f5d1-0c69-4be8-a041-8437402ef215 73487 0 2022-04-22 19:58:33 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 
19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 22:25:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-04-22 22:42:49 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi 
BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:25 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:25 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:19:25 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:19:25 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:19:25.777: INFO: 
Logging kubelet events for node node2
Apr 22 23:19:25.779: INFO: 
Logging pods the kubelet thinks are on node node2
Apr 22 23:19:25.797: INFO: service-proxy-toggled-9lhlz started at 2022-04-22 23:18:59 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.797: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Apr 22 23:19:25.797: INFO: netserver-1 started at 2022-04-22 23:19:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.797: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:19:25.797: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.797: INFO: 	Container nginx-proxy ready: true, restart count 1
Apr 22 23:19:25.797: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.797: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 22 23:19:25.798: INFO: collectd-ptpbz started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container collectd ready: true, restart count 0
Apr 22 23:19:25.798: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 22 23:19:25.798: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 22 23:19:25.798: INFO: up-down-2-6gqgd started at 2022-04-22 23:18:17 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container up-down-2 ready: true, restart count 0
Apr 22 23:19:25.798: INFO: netserver-1 started at 2022-04-22 23:18:38 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:19:25.798: INFO: service-proxy-disabled-9ncl7 started at 2022-04-22 23:18:50 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Apr 22 23:19:25.798: INFO: boom-server started at 2022-04-22 23:18:59 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container boom-server ready: true, restart count 0
Apr 22 23:19:25.798: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:19:25.798: INFO: netserver-1 started at 2022-04-22 23:19:09 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:19:25.798: INFO: host-test-container-pod started at 2022-04-22 23:19:18 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:19:25.798: INFO: hostexec started at 2022-04-22 23:19:22 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:19:25.798: INFO: up-down-2-kt7lb started at 2022-04-22 23:18:17 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container up-down-2 ready: true, restart count 0
Apr 22 23:19:25.798: INFO: netserver-1 started at 2022-04-22 23:19:08 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:19:25.798: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container discover ready: false, restart count 0
Apr 22 23:19:25.798: INFO: 	Container init ready: false, restart count 0
Apr 22 23:19:25.798: INFO: 	Container install ready: false, restart count 0
Apr 22 23:19:25.798: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:19:25.798: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:19:25.798: INFO: service-proxy-disabled-jkkrd started at 2022-04-22 23:18:50 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Apr 22 23:19:25.798: INFO: kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:19:25.798: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 22 23:19:25.798: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:19:25.798: INFO: nodeport-update-service-n6svj started at 2022-04-22 23:18:08 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container nodeport-update-service ready: true, restart count 0
Apr 22 23:19:25.798: INFO: pod-server-1 started at 2022-04-22 23:18:16 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:19:25.798: INFO: netserver-1 started at 2022-04-22 23:18:47 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:19:25.798: INFO: netserver-1 started at 2022-04-22 23:18:56 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:19:25.798: INFO: test-container-pod started at 2022-04-22 23:19:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:19:25.798: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:19:25.798: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container nodereport ready: true, restart count 0
Apr 22 23:19:25.798: INFO: 	Container reconcile ready: true, restart count 0
Apr 22 23:19:25.798: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container cmk-webhook ready: true, restart count 0
Apr 22 23:19:25.798: INFO: nodeport-update-service-mcmjp started at 2022-04-22 23:18:08 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container nodeport-update-service ready: true, restart count 0
Apr 22 23:19:25.798: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 22 23:19:25.798: INFO: service-proxy-toggled-skcm8 started at 2022-04-22 23:18:59 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:19:25.798: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Apr 22 23:19:26.551: INFO: 
Latency metrics for node node2
Apr 22 23:19:26.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-6070" for this suite.


• Failure [78.024 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Apr 22 23:19:24.663: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":0,"skipped":53,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:09.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-6445
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:19:09.698: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:09.731: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:11.735: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:13.734: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:15.735: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:17.736: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:19.735: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:21.735: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:23.734: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:25.737: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:27.734: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:29.735: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:31.734: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:19:31.738: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:19:35.759: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:19:35.759: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:35.766: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:35.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6445" for this suite.


S [SKIPPING] [26.202 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:36.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Apr 22 23:19:36.517: INFO: (0) /api/v1/nodes/node1/proxy/logs/: 
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-3398
STEP: changing service nodeport-range-test to out-of-range NodePort 46373
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 46373
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:36.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3398" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":3,"skipped":626,"failed":0}
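The out-of-range rejection above relies on the apiserver's NodePort allocation range, which defaults to 30000-32767 (configurable with `--service-node-port-range`); port 46373 falls outside it. A client-side sketch of the same bound check, assuming the default range:

```shell
# Return success iff the port lies in the default NodePort range
# (30000-32767); the apiserver rejects Services requesting a NodePort
# outside the configured range.
in_nodeport_range() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

in_nodeport_range 46373 || echo "46373 rejected: outside 30000-32767"
```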

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:08.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-712
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:19:08.744: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:08.774: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:10.780: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:12.779: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:14.781: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:16.778: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:18.778: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:20.781: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:22.779: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:24.779: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:26.777: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:28.778: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:30.780: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:19:30.786: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:19:36.808: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:19:36.809: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:36.815: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:36.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-712" for this suite.


S [SKIPPING] [28.188 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
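The repeated "Requires at least 2 nodes (not -1)" skips above come from a framework guard that compares the schedulable-node count against a test's minimum; -1 indicates the count was never populated, which also triggers the skip. A hedged sketch of that guard's logic (function name is illustrative, not the framework's actual API):

```shell
# Illustrative minimum-node guard: skip (non-zero return) when the
# schedulable-node count is below the requirement. A missing count
# defaults to -1, matching the message in the log above.
require_nodes() {
  required="$1"
  schedulable="${2:--1}"
  if [ "$schedulable" -lt "$required" ]; then
    echo "Requires at least $required nodes (not $schedulable)"
    return 1
  fi
}

require_nodes 2   # count unset: prints the same skip message seen above
```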
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:11.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-1993
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:19:11.600: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:11.632: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:13.636: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:15.638: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:17.636: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:19.637: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:21.637: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:23.637: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:25.638: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:27.637: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:29.638: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:31.635: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:19:31.640: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 22 23:19:33.644: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:19:39.667: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:19:39.667: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:39.674: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:39.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1993" for this suite.


S [SKIPPING] [28.194 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:39.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Apr 22 23:19:39.992: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:39.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6502" for this suite.


S [SKIPPING] [0.037 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:24.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-338e7824-4cde-4046-b80b-316524f2409b]
STEP: Verifying pods for RC slow-terminating-unready-pod
Apr 22 23:19:24.960: INFO: Pod name slow-terminating-unready-pod: Found 0 pods out of 1
Apr 22 23:19:29.966: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Apr 22 23:19:29.975: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-fvvt8]: "NOW: 2022-04-22 23:19:29.974312549 +0000 UTC m=+1.372356584", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-8281.svc.cluster.local
Apr 22 23:19:29.975: INFO: Creating new exec pod
Apr 22 23:19:33.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8281 exec execpod-zr727 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8281.svc.cluster.local:80/'
Apr 22 23:19:34.240: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8281.svc.cluster.local:80/\n"
Apr 22 23:19:34.240: INFO: stdout: "NOW: 2022-04-22 23:19:34.231453734 +0000 UTC m=+5.629497822"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-8281 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Apr 22 23:19:39.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8281 exec execpod-zr727 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8281.svc.cluster.local:80/; test "$?" -ne "0"'
Apr 22 23:19:40.608: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8281.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Apr 22 23:19:40.609: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Apr 22 23:19:40.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8281 exec execpod-zr727 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-8281.svc.cluster.local:80/'
Apr 22 23:19:41.066: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-8281.svc.cluster.local:80/\n"
Apr 22 23:19:41.066: INFO: stdout: "NOW: 2022-04-22 23:19:40.894314927 +0000 UTC m=+12.292358961"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-8281
STEP: deleting service tolerate-unready in namespace services-8281
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:41.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8281" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:16.175 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":3,"skipped":339,"failed":0}
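The tolerate-unready toggling above corresponds to the Service field `spec.publishNotReadyAddresses`: when true, endpoints are published even for pods that are not Ready, including slow-terminating ones. A minimal manifest sketch (names taken from the log; ports assumed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tolerate-unready
spec:
  selector:
    name: slow-terminating-unready-pod
  publishNotReadyAddresses: true   # the test flips this off, then back on
  ports:
  - port: 80
    targetPort: 80
```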

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:41.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Apr 22 23:19:41.400: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:41.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-4153" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:36.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Apr 22 23:19:46.967: INFO: Waiting up to 2m0s to get response from 10.244.4.110:8080
Apr 22 23:19:46.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-test6fwtr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.110:8080/clientip'
Apr 22 23:19:47.227: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.110:8080/clientip\n"
Apr 22 23:19:47.227: INFO: stdout: "10.244.0.9:48306"
STEP: Verifying the preserved source ip
Apr 22 23:19:47.227: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
Apr 22 23:19:47.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-test6fwtr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
Apr 22 23:19:47.484: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
Apr 22 23:19:47.484: INFO: stdout: "10.244.0.9:42250"
STEP: Verifying the preserved source ip
Apr 22 23:19:47.484: INFO: Waiting up to 2m0s to get response from 10.244.3.120:8080
Apr 22 23:19:47.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-test6fwtr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.120:8080/clientip'
Apr 22 23:19:47.740: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.120:8080/clientip\n"
Apr 22 23:19:47.740: INFO: stdout: "10.244.0.9:45522"
STEP: Verifying the preserved source ip
Apr 22 23:19:47.740: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Apr 22 23:19:47.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-test6fwtr -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Apr 22 23:19:47.970: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Apr 22 23:19:47.970: INFO: stdout: "10.244.0.9:57138"
STEP: Verifying the preserved source ip
Apr 22 23:19:47.970: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Apr 22 23:19:47.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testf4pcx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Apr 22 23:19:48.282: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Apr 22 23:19:48.282: INFO: stdout: "10.244.4.110:43310"
STEP: Verifying the preserved source ip
Apr 22 23:19:48.282: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
Apr 22 23:19:48.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testf4pcx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
Apr 22 23:19:49.026: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
Apr 22 23:19:49.026: INFO: stdout: "10.244.4.110:44800"
STEP: Verifying the preserved source ip
Apr 22 23:19:49.026: INFO: Waiting up to 2m0s to get response from 10.244.3.120:8080
Apr 22 23:19:49.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testf4pcx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.120:8080/clientip'
Apr 22 23:19:49.339: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.120:8080/clientip\n"
Apr 22 23:19:49.339: INFO: stdout: "10.244.4.110:39134"
STEP: Verifying the preserved source ip
Apr 22 23:19:49.339: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Apr 22 23:19:49.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testf4pcx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Apr 22 23:19:49.745: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Apr 22 23:19:49.745: INFO: stdout: "10.244.4.110:48720"
STEP: Verifying the preserved source ip
Apr 22 23:19:49.745: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Apr 22 23:19:49.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testggfn4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Apr 22 23:19:50.003: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Apr 22 23:19:50.003: INFO: stdout: "10.244.1.6:49198"
STEP: Verifying the preserved source ip
Apr 22 23:19:50.003: INFO: Waiting up to 2m0s to get response from 10.244.4.110:8080
Apr 22 23:19:50.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testggfn4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.110:8080/clientip'
Apr 22 23:19:50.221: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.110:8080/clientip\n"
Apr 22 23:19:50.221: INFO: stdout: "10.244.1.6:52270"
STEP: Verifying the preserved source ip
Apr 22 23:19:50.221: INFO: Waiting up to 2m0s to get response from 10.244.3.120:8080
Apr 22 23:19:50.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testggfn4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.120:8080/clientip'
Apr 22 23:19:50.461: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.120:8080/clientip\n"
Apr 22 23:19:50.461: INFO: stdout: "10.244.1.6:55076"
STEP: Verifying the preserved source ip
Apr 22 23:19:50.461: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Apr 22 23:19:50.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testggfn4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Apr 22 23:19:50.690: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Apr 22 23:19:50.690: INFO: stdout: "10.244.1.6:53604"
STEP: Verifying the preserved source ip
Apr 22 23:19:50.690: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Apr 22 23:19:50.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testgz2pq -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Apr 22 23:19:51.034: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Apr 22 23:19:51.034: INFO: stdout: "10.244.3.120:59068"
STEP: Verifying the preserved source ip
Apr 22 23:19:51.034: INFO: Waiting up to 2m0s to get response from 10.244.4.110:8080
Apr 22 23:19:51.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testgz2pq -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.110:8080/clientip'
Apr 22 23:19:51.300: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.110:8080/clientip\n"
Apr 22 23:19:51.300: INFO: stdout: "10.244.3.120:52454"
STEP: Verifying the preserved source ip
Apr 22 23:19:51.300: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
Apr 22 23:19:51.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testgz2pq -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
Apr 22 23:19:51.581: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
Apr 22 23:19:51.581: INFO: stdout: "10.244.3.120:49010"
STEP: Verifying the preserved source ip
Apr 22 23:19:51.581: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Apr 22 23:19:51.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testgz2pq -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Apr 22 23:19:51.843: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Apr 22 23:19:51.843: INFO: stdout: "10.244.3.120:36016"
STEP: Verifying the preserved source ip
Apr 22 23:19:51.843: INFO: Waiting up to 2m0s to get response from 10.244.0.9:8080
Apr 22 23:19:51.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testhzzv6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip'
Apr 22 23:19:52.104: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.9:8080/clientip\n"
Apr 22 23:19:52.104: INFO: stdout: "10.244.2.5:57198"
STEP: Verifying the preserved source ip
Apr 22 23:19:52.104: INFO: Waiting up to 2m0s to get response from 10.244.4.110:8080
Apr 22 23:19:52.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testhzzv6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.110:8080/clientip'
Apr 22 23:19:52.348: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.110:8080/clientip\n"
Apr 22 23:19:52.348: INFO: stdout: "10.244.2.5:46842"
STEP: Verifying the preserved source ip
Apr 22 23:19:52.348: INFO: Waiting up to 2m0s to get response from 10.244.1.6:8080
Apr 22 23:19:52.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testhzzv6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip'
Apr 22 23:19:52.578: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.6:8080/clientip\n"
Apr 22 23:19:52.578: INFO: stdout: "10.244.2.5:34530"
STEP: Verifying the preserved source ip
Apr 22 23:19:52.578: INFO: Waiting up to 2m0s to get response from 10.244.3.120:8080
Apr 22 23:19:52.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4003 exec no-snat-testhzzv6 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.120:8080/clientip'
Apr 22 23:19:52.830: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.120:8080/clientip\n"
Apr 22 23:19:52.830: INFO: stdout: "10.244.2.5:48212"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:52.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-4003" for this suite.


• [SLOW TEST:15.955 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":4,"skipped":680,"failed":0}
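Each "Verifying the preserved source ip" step above compares the `ip:port` echoed by the peer's `/clientip` endpoint against the calling pod's own IP; equality means the packet was not SNATed on the pod-to-pod path. A sketch of that comparison (values taken from the log):

```shell
# /clientip echoes the source "ip:port" the server observed; if the IP
# part equals the client pod's own IP, no SNAT occurred in transit.
verify_no_snat() {
  own_ip="$1"
  observed="$2"            # e.g. "10.244.0.9:48306"
  src_ip="${observed%:*}"  # drop the ephemeral source port
  [ "$src_ip" = "$own_ip" ]
}

verify_no_snat 10.244.0.9 "10.244.0.9:48306" && echo "source IP preserved"
```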

SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:50.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-7431
STEP: creating service service-proxy-disabled in namespace services-7431
STEP: creating replication controller service-proxy-disabled in namespace services-7431
I0422 23:18:50.252843      33 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-7431, replica count: 3
I0422 23:18:53.304516      33 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:18:56.305272      33 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:18:59.306170      33 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-7431
STEP: creating service service-proxy-toggled in namespace services-7431
STEP: creating replication controller service-proxy-toggled in namespace services-7431
I0422 23:18:59.318361      33 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-7431, replica count: 3
I0422 23:19:02.369501      33 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:05.370096      33 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:08.370445      33 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:11.371644      33 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Apr 22 23:19:11.374: INFO: Creating new host exec pod
Apr 22 23:19:11.386: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:13.389: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:15.390: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 22 23:19:15.390: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 22 23:19:21.406: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.166:80 2>&1 || true; echo; done" in pod services-7431/verify-service-up-host-exec-pod
Apr 22 23:19:21.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.166:80 2>&1 || true; echo; done'
Apr 22 23:19:21.806: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.166:80\n+ echo\n" (the "+ wget"/"+ echo" trace pair repeats 150 times; repeats elided)
Apr 22 23:19:21.806: INFO: stdout: "service-proxy-toggled-qffbg\nservice-proxy-toggled-9lhlz\nservice-proxy-toggled-9lhlz\n..." (150 responses in total, spread across the three endpoint pods -qffbg, -9lhlz, and -skcm8; repeats elided)
Apr 22 23:19:21.807: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.166:80 2>&1 || true; echo; done" in pod services-7431/verify-service-up-exec-pod-mc49j
Apr 22 23:19:21.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-up-exec-pod-mc49j -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.166:80 2>&1 || true; echo; done'
Apr 22 23:19:22.173: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.166:80\n+ echo\n" (the "+ wget"/"+ echo" trace pair repeats 150 times; repeats elided)
Apr 22 23:19:22.173: INFO: stdout: "service-proxy-toggled-skcm8\nservice-proxy-toggled-skcm8\nservice-proxy-toggled-skcm8\n..." (150 responses in total, spread across the three endpoint pods -qffbg, -9lhlz, and -skcm8; repeats elided)
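The harness counts one backend response per stdout line: the probe loop swallows transient failures with `|| true` and prints a newline per attempt, then the collected lines are de-duplicated to confirm every endpoint answered. A minimal local sketch of that counting step (the probe loop itself is copied from the log above; `responses.txt` is a stand-in for the captured stdout, seeded with pod names from the log):

```shell
#!/bin/sh
# The probe loop the e2e test runs inside the exec pod (copied from the log):
#   for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.166:80 2>&1 || true; echo; done
# "|| true" keeps the loop alive past transient failures, and the bare "echo"
# guarantees one line per attempt so responses can be counted line by line.
#
# responses.txt is a stand-in for the captured stdout (pod names from the log):
printf 'service-proxy-toggled-qffbg\nservice-proxy-toggled-9lhlz\nservice-proxy-toggled-skcm8\nservice-proxy-toggled-9lhlz\n' > responses.txt
# Count distinct responding backends, as in "verifying service has 3 reachable backends":
distinct=$(sort -u responses.txt | wc -l | tr -d ' ')
echo "distinct backends: $distinct"
```

With all three pod names present in the stand-in data, the count comes out to 3, matching the "3 reachable backends" check later in this test.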
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7431
STEP: Deleting pod verify-service-up-exec-pod-mc49j in namespace services-7431
STEP: verifying service-disabled is not up
Apr 22 23:19:22.187: INFO: Creating new host exec pod
Apr 22 23:19:22.201: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:24.205: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:26.206: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 22 23:19:26.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.19.135:80 && echo service-down-failed'
Apr 22 23:19:28.687: INFO: rc: 28
Apr 22 23:19:28.687: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.19.135:80 && echo service-down-failed" in pod services-7431/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.19.135:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.19.135:80
command terminated with exit code 28

error:
exit status 28
Output: 
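curl's exit code 28 is CURLE_OPERATION_TIMEDOUT, i.e. the `--connect-timeout 2` expired, and that is the outcome this step wants: `service-down-failed` would only be echoed if curl actually reached the service. A minimal local sketch of that short-circuit, with `false` standing in for a timing-out curl:

```shell
# "cmd && echo service-down-failed" prints the failure marker only when cmd
# succeeds; here `false` stands in for "curl --connect-timeout 2" timing out.
out=$(false && echo service-down-failed) || true
# The marker is absent, so the "service is not up" check passes.
test -z "$out" && echo "service correctly unreachable"
```

This is why the log records rc 28 and an empty stdout as success rather than failure.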
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7431
STEP: adding service-proxy-name label
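`service.kubernetes.io/service-proxy-name` is the well-known label that tells kube-proxy to stop programming rules for a Service, which is why the ClusterIP stops answering after this step. A hedged sketch of the toggle, assuming a cluster context (the Service name `service-proxy-toggled` is inferred from the pod names in the log; the label value is illustrative):

```shell
# Attach the label: kube-proxy now ignores the Service, so its VIP goes dark.
kubectl --namespace=services-7431 label service service-proxy-toggled \
  service.kubernetes.io/service-proxy-name=foo-bar
# Remove it again (trailing "-"): kube-proxy resumes programming the Service.
kubectl --namespace=services-7431 label service service-proxy-toggled \
  service.kubernetes.io/service-proxy-name-
```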
STEP: verifying service is not up
Apr 22 23:19:28.702: INFO: Creating new host exec pod
Apr 22 23:19:28.718: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:30.722: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:32.722: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 22 23:19:32.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.28.166:80 && echo service-down-failed'
Apr 22 23:19:35.010: INFO: rc: 28
Apr 22 23:19:35.010: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.28.166:80 && echo service-down-failed" in pod services-7431/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.28.166:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.28.166:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7431
STEP: removing service-proxy-name label
STEP: verifying service is up
Apr 22 23:19:35.022: INFO: Creating new host exec pod
Apr 22 23:19:35.034: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:37.039: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:39.037: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 22 23:19:39.037: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 22 23:19:47.054: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.166:80 2>&1 || true; echo; done" in pod services-7431/verify-service-up-host-exec-pod
Apr 22 23:19:47.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.166:80 2>&1 || true; echo; done'
Apr 22 23:19:47.553: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.166:80\n+ echo\n" (the "+ wget"/"+ echo" trace pair repeats 150 times; repeats elided)
Apr 22 23:19:47.553: INFO: stdout: "service-proxy-toggled-qffbg\nservice-proxy-toggled-qffbg\nservice-proxy-toggled-skcm8\nservice-proxy-toggled-9lhlz\n[... 150 responses in total, distributed across the three backends service-proxy-toggled-qffbg, service-proxy-toggled-skcm8, and service-proxy-toggled-9lhlz ...]\nservice-proxy-toggled-qffbg\n"
Apr 22 23:19:47.553: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.166:80 2>&1 || true; echo; done" in pod services-7431/verify-service-up-exec-pod-2kvq4
Apr 22 23:19:47.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-up-exec-pod-2kvq4 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.28.166:80 2>&1 || true; echo; done'
Apr 22 23:19:48.018: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.28.166:80\n+ echo\n[... '+ wget -q -T 1 -O - http://10.233.28.166:80' and '+ echo' repeated for the 150-iteration loop ...]\n+ wget -q -T 1 -O - http://10.233.28.166:80\n+ echo\n"
Apr 22 23:19:48.018: INFO: stdout: "service-proxy-toggled-9lhlz\nservice-proxy-toggled-skcm8\nservice-proxy-toggled-qffbg\n[... 150 responses in total, distributed across the three backends service-proxy-toggled-9lhlz, service-proxy-toggled-skcm8, and service-proxy-toggled-qffbg ...]\nservice-proxy-toggled-qffbg\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7431
STEP: Deleting pod verify-service-up-exec-pod-2kvq4 in namespace services-7431
STEP: verifying service-disabled is still not up
Apr 22 23:19:48.030: INFO: Creating new host exec pod
Apr 22 23:19:48.042: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:50.045: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:52.045: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:54.045: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 22 23:19:54.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.19.135:80 && echo service-down-failed'
Apr 22 23:19:56.384: INFO: rc: 28
Apr 22 23:19:56.384: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.19.135:80 && echo service-down-failed" in pod services-7431/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7431 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.19.135:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.19.135:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7431
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:56.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7431" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:66.176 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:36.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Apr 22 23:19:36.947: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:38.950: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:40.952: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:42.951: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:44.954: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Apr 22 23:19:44.968: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:46.972: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:48.972: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:50.974: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Apr 22 23:19:52.993: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:54.998: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:56.998: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Apr 22 23:19:57.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-6055 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Apr 22 23:19:57.351: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 7 flow entries have been shown.\n"
Apr 22 23:19:57.351: INFO: stdout: "tcp      6 3598 CLOSE_WAIT src=10.244.3.121 dst=10.10.190.208 sport=40440 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=36663 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Apr 22 23:19:57.351: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3598 CLOSE_WAIT src=10.244.3.121 dst=10.10.190.208 sport=40440 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=36663 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:19:57.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-6055" for this suite.


• [SLOW TEST:20.460 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":2,"skipped":887,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W0422 23:18:08.798906      36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:18:08.799: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:18:08.800: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-7968
STEP: creating service up-down-1 in namespace services-7968
STEP: creating replication controller up-down-1 in namespace services-7968
I0422 23:18:08.812211      36 runners.go:190] Created replication controller with name: up-down-1, namespace: services-7968, replica count: 3
I0422 23:18:11.864077      36 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:18:14.865348      36 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:18:17.866499      36 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-7968
STEP: creating service up-down-2 in namespace services-7968
STEP: creating replication controller up-down-2 in namespace services-7968
I0422 23:18:17.881734      36 runners.go:190] Created replication controller with name: up-down-2, namespace: services-7968, replica count: 3
I0422 23:18:20.933221      36 runners.go:190] up-down-2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:18:23.934169      36 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:18:26.934546      36 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
Apr 22 23:18:26.937: INFO: Creating new host exec pod
Apr 22 23:18:26.950: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:28.953: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:30.956: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 22 23:18:30.956: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 22 23:18:34.973: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.21.117:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-host-exec-pod
Apr 22 23:18:34.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.21.117:80 2>&1 || true; echo; done'
Apr 22 23:18:35.361: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.21.117:80\n+ echo\n[... '+ wget -q -T 1 -O - http://10.233.21.117:80' and '+ echo' repeated for the 150-iteration loop ...]\n+ wget -q -T 1 -O - http://10.233.21.117:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.21.117:80\n+ echo\n"
Apr 22 23:18:35.361: INFO: stdout: "up-down-1-7fxp9\nup-down-1-nqrv5\nup-down-1-dm9dv\n..." (one line per request, each naming one of the three up-down-1 backends 7fxp9, nqrv5, dm9dv; full 150-line output condensed)
Apr 22 23:18:35.361: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.21.117:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-exec-pod-xqljs
Apr 22 23:18:35.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-exec-pod-xqljs -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.21.117:80 2>&1 || true; echo; done'
Apr 22 23:18:35.822: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.21.117:80\n+ echo\n" (the '+ wget' / '+ echo' pair repeats identically for all 150 loop iterations; trace condensed)
Apr 22 23:18:35.823: INFO: stdout: "up-down-1-7fxp9\nup-down-1-dm9dv\nup-down-1-nqrv5\n..." (one line per request, each naming one of the three up-down-1 backends 7fxp9, nqrv5, dm9dv; full 150-line output condensed)
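The exchange above is the suite's verify-service-up pattern: a `/bin/sh -x -c` loop of 150 one-second-timeout `wget` calls against the ClusterIP, whose stdout is one backend pod name per successful request; the test then checks that every expected endpoint appears. The counting half of that check can be sketched in isolation (the function name and sample pod names below are illustrative, not part of the e2e framework):

```shell
#!/bin/sh
# Sketch of the backend-counting step of a verify-service-up check:
# given the stdout of the wget loop (one pod name per line, a blank
# line where a request timed out), count distinct responding backends.
# count_distinct_backends is a hypothetical helper, not a k8s e2e API.
count_distinct_backends() {
  # sort -u collapses repeats; grep -c . counts only non-empty lines
  printf '%s\n' "$1" | sort -u | grep -c .
}

# Sample data shaped like the log's stdout (includes one failed request)
sample="up-down-1-7fxp9
up-down-1-nqrv5
up-down-1-dm9dv
up-down-1-7fxp9

up-down-1-nqrv5"

echo "distinct backends: $(count_distinct_backends "$sample")"
```

With three pod names in the sample, the script reports 3 distinct backends, mirroring the "verifying service has 3 reachable backends" step in the log.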
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7968
STEP: Deleting pod verify-service-up-exec-pod-xqljs in namespace services-7968
STEP: verifying service up-down-2 is up
Apr 22 23:18:35.838: INFO: Creating new host exec pod
Apr 22 23:18:35.851: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:37.855: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:39.855: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 22 23:18:39.855: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 22 23:18:53.876: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-host-exec-pod
Apr 22 23:18:53.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done'
Apr 22 23:18:54.279: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n" (the '+ wget' / '+ echo' pair repeats identically for all 150 loop iterations; trace condensed)
Apr 22 23:18:54.280: INFO: stdout: "up-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\n..." (one line per request, each naming one of the three up-down-2 backends kt7lb, 6gqgd, lt6q4; full 150-line output condensed)
Apr 22 23:18:54.280: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-exec-pod-gts6n
Apr 22 23:18:54.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-exec-pod-gts6n -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done'
Apr 22 23:18:54.719: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n" (the '+ wget' / '+ echo' pair repeats identically for the loop iterations; trace condensed, log truncated mid-line here)
wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n"
Apr 22 23:18:54.720: INFO: stdout: "up-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\n..." [150 responses, each from one of the three endpoint pods up-down-2-lt6q4, up-down-2-6gqgd, up-down-2-kt7lb; full list elided]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7968
STEP: Deleting pod verify-service-up-exec-pod-gts6n in namespace services-7968
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-7968, will wait for the garbage collector to delete the pods
Apr 22 23:18:54.792: INFO: Deleting ReplicationController up-down-1 took: 3.932399ms
Apr 22 23:18:54.893: INFO: Terminating ReplicationController up-down-1 pods took: 101.206764ms
STEP: verifying service up-down-1 is not up
Apr 22 23:19:07.902: INFO: Creating new host exec pod
Apr 22 23:19:07.914: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:09.918: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 22 23:19:09.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.21.117:80 && echo service-down-failed'
Apr 22 23:19:12.273: INFO: rc: 28
Apr 22 23:19:12.273: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.21.117:80 && echo service-down-failed" in pod services-7968/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.21.117:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.21.117:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7968
STEP: verifying service up-down-2 is still up
Apr 22 23:19:12.280: INFO: Creating new host exec pod
Apr 22 23:19:12.295: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:14.300: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:16.298: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:18.299: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 22 23:19:18.299: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 22 23:19:24.315: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-host-exec-pod
Apr 22 23:19:24.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done'
Apr 22 23:19:24.687: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n" [the "+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n" trace repeats 150 times; repetitions elided]
Apr 22 23:19:24.687: INFO: stdout: "up-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\n..." [150 responses, each from one of the three endpoint pods up-down-2-lt6q4, up-down-2-6gqgd, up-down-2-kt7lb; full list elided]
Apr 22 23:19:24.688: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-exec-pod-x862v
Apr 22 23:19:24.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-exec-pod-x862v -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done'
Apr 22 23:19:25.079: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n" [the "+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n" trace repeats 150 times; repetitions elided]
Apr 22 23:19:25.080: INFO: stdout: "up-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\n..." [150 responses, each from one of the three endpoint pods up-down-2-lt6q4, up-down-2-6gqgd, up-down-2-kt7lb; full list elided]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7968
STEP: Deleting pod verify-service-up-exec-pod-x862v in namespace services-7968
STEP: creating service up-down-3 in namespace services-7968
STEP: creating replication controller up-down-3 in namespace services-7968
I0422 23:19:25.101268      36 runners.go:190] Created replication controller with name: up-down-3, namespace: services-7968, replica count: 3
I0422 23:19:28.152268      36 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:31.152705      36 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:34.154241      36 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
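The "Created replication controller ... replica count: 3" lines above are emitted by the test runner after it submits a ReplicationController and polls until all replicas report Running. A minimal sketch of the equivalent kubectl invocation is below; this is not the e2e framework's actual code, and the container image and ports are assumptions (the agnhost test image is what recent e2e suites typically use):

```shell
# Hedged sketch: create an RC like the test's "up-down-3" and wait for 3
# running replicas. Image/port are illustrative, not taken from this log.
kubectl --namespace=services-7968 create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: up-down-3
spec:
  replicas: 3
  selector:
    name: up-down-3
  template:
    metadata:
      labels:
        name: up-down-3
    spec:
      containers:
      - name: up-down-3
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32  # assumed test image
        ports:
        - containerPort: 8080                            # assumed port
EOF

# Poll roughly the way the runner's 3-second status loop does.
kubectl --namespace=services-7968 get rc up-down-3 \
  -o jsonpath='{.status.readyReplicas}'
```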
STEP: verifying service up-down-2 is still up
Apr 22 23:19:34.156: INFO: Creating new host exec pod
Apr 22 23:19:34.171: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:36.175: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:38.174: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:40.175: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 22 23:19:40.175: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 22 23:19:48.190: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-host-exec-pod
Apr 22 23:19:48.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done'
Apr 22 23:19:48.711: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n"
Apr 22 23:19:48.711: INFO: stdout: "up-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2
-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\n"
Apr 22 23:19:48.711: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-exec-pod-sfjnf
Apr 22 23:19:48.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-exec-pod-sfjnf -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.59.154:80 2>&1 || true; echo; done'
Apr 22 23:19:49.145: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.59.154:80\n+ echo\n"
Apr 22 23:19:49.145: INFO: stdout: "up-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2
-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-lt6q4\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-6gqgd\nup-down-2-kt7lb\nup-down-2-6gqgd\nup-down-2-lt6q4\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7968
STEP: Deleting pod verify-service-up-exec-pod-sfjnf in namespace services-7968
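The "verifying service ... is up" steps above all follow the same pattern: from both a host-network exec pod and a regular exec pod, hit the service's ClusterIP 150 times with `wget`, then check that every expected backend pod name appears in the combined stdout. A minimal sketch of that check, with the response list simulated rather than fetched from a live cluster (pod names and the distinct-count logic here are illustrative, not the framework's actual implementation):

```shell
# Hedged sketch: count distinct backend pod names in the wget output and
# compare against the expected endpoint count (3, per the log above).
EXPECTED_BACKENDS=3

# In the real test this stdout comes from:
#   kubectl exec <verify-pod> -- /bin/sh -c \
#     'for i in $(seq 1 150); do wget -q -T 1 -O - http://<ClusterIP>:80 || true; echo; done'
# Here we stand in a small sample of responses.
responses="up-down-3-lt5qv
up-down-3-nxxpb
up-down-3-pg45m
up-down-3-nxxpb"

unique=$(printf '%s\n' "$responses" | sort -u | wc -l)
if [ "$unique" -eq "$EXPECTED_BACKENDS" ]; then
  echo "service has $EXPECTED_BACKENDS reachable backends"
else
  echo "only $unique distinct backends responded" >&2
fi
```

The `-T 1` timeout plus `|| true` keeps one unreachable endpoint from aborting the loop, so a partial outage shows up as a missing pod name rather than a failed exec.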
STEP: verifying service up-down-3 is up
Apr 22 23:19:49.158: INFO: Creating new host exec pod
Apr 22 23:19:49.174: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:51.180: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:53.177: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:55.180: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:57.178: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:59.178: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:01.179: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:03.177: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:05.179: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 22 23:20:05.179: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 22 23:20:09.202: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.136:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-host-exec-pod
Apr 22 23:20:09.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.136:80 2>&1 || true; echo; done'
Apr 22 23:20:09.577: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n …(the identical "+ wget -q -T 1 -O - http://10.233.50.136:80" / "+ echo" trace repeats for all 150 loop iterations)…\n"
Apr 22 23:20:09.578: INFO: stdout: "up-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3
-pg45m\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\n"
Apr 22 23:20:09.578: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.136:80 2>&1 || true; echo; done" in pod services-7968/verify-service-up-exec-pod-ddf8c
Apr 22 23:20:09.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7968 exec verify-service-up-exec-pod-ddf8c -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.50.136:80 2>&1 || true; echo; done'
Apr 22 23:20:09.991: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.50.136:80\n+ echo\n …(the identical "+ wget -q -T 1 -O - http://10.233.50.136:80" / "+ echo" trace repeats for all 150 loop iterations)…\n"
Apr 22 23:20:09.992: INFO: stdout: "up-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3
-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-pg45m\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-nxxpb\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-lt5qv\nup-down-3-pg45m\n"
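The stdout above is the raw material for the service-up check: each non-empty line is the hostname of whichever backend pod answered one of the 150 wget probes, and the suite verifies that every expected pod shows up at least once. A minimal sketch of that tally (pod names taken from this log; the helper name is hypothetical, not the framework's own):

```python
from collections import Counter

def tally_hostnames(stdout: str, expected_pods: set) -> Counter:
    """Count which backend pods answered a wget probe loop.

    Empty lines correspond to probes that printed nothing before the
    trailing `echo` (e.g. a wget timeout); they are simply skipped here.
    """
    counts = Counter(line for line in stdout.split("\n") if line)
    missing = expected_pods - set(counts)
    if missing:
        raise AssertionError(f"no responses from pods: {missing}")
    return counts

# Shortened sample in the shape of the stdout above.
sample = "up-down-3-lt5qv\nup-down-3-nxxpb\nup-down-3-pg45m\nup-down-3-nxxpb\n"
counts = tally_hostnames(
    sample, {"up-down-3-lt5qv", "up-down-3-nxxpb", "up-down-3-pg45m"}
)
```

Seeing all three replica names interleaved, as in the log, is what demonstrates the Service is load-balancing across its endpoints.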
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7968
STEP: Deleting pod verify-service-up-exec-pod-ddf8c in namespace services-7968
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:10.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7968" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:121.238 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":133,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:52.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-2298
STEP: creating replication controller externalip-test in namespace services-2298
I0422 23:19:52.879871      34 runners.go:190] Created replication controller with name: externalip-test, namespace: services-2298, replica count: 2
I0422 23:19:55.931701      34 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:58.932284      34 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:20:01.933211      34 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 22 23:20:01.933: INFO: Creating new exec pod
Apr 22 23:20:08.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2298 exec execpodg9lcx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Apr 22 23:20:09.241: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Apr 22 23:20:09.241: INFO: stdout: "externalip-test-nfvhx"
Apr 22 23:20:09.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2298 exec execpodg9lcx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.55.195 80'
Apr 22 23:20:09.514: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.55.195 80\nConnection to 10.233.55.195 80 port [tcp/http] succeeded!\n"
Apr 22 23:20:09.514: INFO: stdout: "externalip-test-4dhcg"
Apr 22 23:20:09.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2298 exec execpodg9lcx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Apr 22 23:20:10.124: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Apr 22 23:20:10.124: INFO: stdout: "externalip-test-4dhcg"
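Each of the three `echo hostName | nc …` probes above sends one line to the serve-hostname backend and expects the pod's name back, first via the Service DNS name, then the ClusterIP, then the unassigned external IP. A rough, self-contained Python equivalent of one such probe, using a loopback stand-in for the service endpoint (the pod name and port are taken from this log; in the real test `nc` talks to agnhost's serve-hostname container):

```python
import socket
import threading

POD_NAME = "externalip-test-4dhcg"  # stand-in for the serving pod

def serve_once(srv: socket.socket) -> None:
    # Mimic agnhost serve-hostname: answer any request with the pod name.
    conn, _ = srv.accept()
    conn.recv(1024)                  # the "hostName" request line
    conn.sendall(POD_NAME.encode())
    conn.close()

def check_endpoint(host: str, port: int, timeout: float = 2.0) -> str:
    """Rough equivalent of `echo hostName | nc -v -t -w 2 <host> <port>`."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"hostName\n")
        return s.recv(1024).decode()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))           # loopback stand-in for the external IP
srv.listen(1)
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()
reply = check_endpoint("127.0.0.1", srv.getsockname()[1])
```

The interesting assertion in the real test is the third probe: the external IP (203.0.113.250) is not assigned to any node, so a successful reply proves kube-proxy is intercepting traffic to it rather than relying on host routing.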
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:10.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2298" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:17.285 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":5,"skipped":682,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:20:10.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Apr 22 23:20:10.386: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Apr 22 23:20:10.491: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:10.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-8678" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.145 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:20:10.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 22 23:20:10.796: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 22 23:20:10.798: INFO: starting watch
STEP: patching
STEP: updating
Apr 22 23:20:10.805: INFO: waiting for watch events with expected annotations
Apr 22 23:20:10.805: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Apr 22 23:20:10.805: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:10.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-7997" for this suite.
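The create/get/list/watch/patch/update/delete sequence above operates on ordinary NetworkPolicy objects; the "patching" step adds the `patched=true` annotation that the watcher then waits for. A sketch of such a manifest and an annotation patch in plain dicts (the policy name and spec values are illustrative, not from this run):

```python
import copy

def make_network_policy(name: str, namespace: str) -> dict:
    """Minimal deny-all-ingress NetworkPolicy manifest (illustrative spec)."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
    }

def apply_annotation_patch(policy: dict, key: str, value: str) -> dict:
    """Merge-style annotation patch, as the e2e 'patching' step performs."""
    patched = copy.deepcopy(policy)
    patched["metadata"].setdefault("annotations", {})[key] = value
    return patched

policy = make_network_policy("deny-ingress", "networkpolicies-7997")
patched = apply_annotation_patch(policy, "patched", "true")
```

The log line `waiting for watch events with expected annotations: map[string]string{"patched":"true"}` corresponds exactly to observing this annotation arrive through the watch stream.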

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":6,"skipped":933,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:26.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
Apr 22 23:19:26.637: INFO: deploying iperf2 server
Apr 22 23:19:26.641: INFO: Waiting for deployment "iperf2-server-deployment" to complete
Apr 22 23:19:26.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Apr 22 23:19:28.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 22 23:19:30.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 22 23:19:32.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786266366, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 22 23:19:34.660: INFO: waiting for iperf2 server endpoints
Apr 22 23:19:36.664: INFO: found iperf2 server endpoints
Apr 22 23:19:36.664: INFO: waiting for client pods to be running
Apr 22 23:19:40.669: INFO: all client pods are ready: 2 pods
Apr 22 23:19:40.672: INFO: server pod phase Running
Apr 22 23:19:40.672: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 23:19:26 +0000 UTC Reason: Message:}
Apr 22 23:19:40.672: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 23:19:33 +0000 UTC Reason: Message:}
Apr 22 23:19:40.672: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 23:19:33 +0000 UTC Reason: Message:}
Apr 22 23:19:40.672: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 23:19:26 +0000 UTC Reason: Message:}
Apr 22 23:19:40.672: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2022-04-22 23:19:32 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://3ec7f377c6f524d762eca374f10f1870d69f6ba76993e59ba349dfe16a94297e Started:0xc004635d5c}
Apr 22 23:19:40.672: INFO: found 2 matching client pods
Apr 22 23:19:40.675: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-2855 PodName:iperf2-clients-mnhr6 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:40.675: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:41.145: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Apr 22 23:19:41.145: INFO: iperf version: 
Apr 22 23:19:41.145: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-mnhr6 (node node1)
Apr 22 23:19:41.148: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-2855 PodName:iperf2-clients-mnhr6 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:41.148: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:56.285: INFO: Exec stderr: ""
Apr 22 23:19:56.286: INFO: output from exec on client pod iperf2-clients-mnhr6 (node node1): 
20220422231942.271,10.244.3.119,43262,10.233.48.239,6789,3,0.0-1.0,3398434816,27187478528
20220422231943.261,10.244.3.119,43262,10.233.48.239,6789,3,1.0-2.0,3370254336,26962034688
20220422231944.268,10.244.3.119,43262,10.233.48.239,6789,3,2.0-3.0,3357540352,26860322816
20220422231945.254,10.244.3.119,43262,10.233.48.239,6789,3,3.0-4.0,2882142208,23057137664
20220422231946.260,10.244.3.119,43262,10.233.48.239,6789,3,4.0-5.0,2529296384,20234371072
20220422231947.266,10.244.3.119,43262,10.233.48.239,6789,3,5.0-6.0,2321809408,18574475264
20220422231948.252,10.244.3.119,43262,10.233.48.239,6789,3,6.0-7.0,1975648256,15805186048
20220422231949.258,10.244.3.119,43262,10.233.48.239,6789,3,7.0-8.0,3409838080,27278704640
20220422231950.264,10.244.3.119,43262,10.233.48.239,6789,3,8.0-9.0,3480354816,27842838528
20220422231951.251,10.244.3.119,43262,10.233.48.239,6789,3,9.0-10.0,3553886208,28431089664
20220422231951.251,10.244.3.119,43262,10.233.48.239,6789,3,0.0-10.0,30279204864,24223300910

Apr 22 23:19:56.288: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-2855 PodName:iperf2-clients-wrlqg ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:56.288: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:56.490: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Apr 22 23:19:56.490: INFO: iperf version: 
Apr 22 23:19:56.490: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-wrlqg (node node2)
Apr 22 23:19:56.493: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-2855 PodName:iperf2-clients-wrlqg ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:56.493: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:12.143: INFO: Exec stderr: ""
Apr 22 23:20:12.143: INFO: output from exec on client pod iperf2-clients-wrlqg (node node2): 
20220422231958.109,10.244.4.109,59692,10.233.48.239,6789,3,0.0-1.0,118489088,947912704
20220422231959.106,10.244.4.109,59692,10.233.48.239,6789,3,1.0-2.0,118751232,950009856
20220422232000.095,10.244.4.109,59692,10.233.48.239,6789,3,2.0-3.0,117047296,936378368
20220422232001.105,10.244.4.109,59692,10.233.48.239,6789,3,3.0-4.0,116785152,934281216
20220422232002.095,10.244.4.109,59692,10.233.48.239,6789,3,4.0-5.0,116916224,935329792
20220422232003.104,10.244.4.109,59692,10.233.48.239,6789,3,5.0-6.0,117571584,940572672
20220422232004.094,10.244.4.109,59692,10.233.48.239,6789,3,6.0-7.0,116129792,929038336
20220422232005.101,10.244.4.109,59692,10.233.48.239,6789,3,7.0-8.0,118095872,944766976
20220422232006.109,10.244.4.109,59692,10.233.48.239,6789,3,8.0-9.0,117964800,943718400
20220422232007.103,10.244.4.109,59692,10.233.48.239,6789,3,9.0-10.0,115736576,925892608
20220422232007.103,10.244.4.109,59692,10.233.48.239,6789,3,0.0-10.0,1173487616,938116431

Apr 22 23:20:12.143: INFO:                                From                                 To    Bandwidth (MB/s)
Apr 22 23:20:12.143: INFO:                               node1                              node1                2888
Apr 22 23:20:12.143: INFO:                               node2                              node1                 112
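Annotation: the comma-separated lines above are iperf2 `--reportstyle C` records, and the framework's summary table converts their last column into MB/s. A minimal parsing sketch, assuming the iperf 2.0.x CSV column order (timestamp, source address, source port, destination address, destination port, transfer ID, interval, transferred bytes, bits/sec), which is consistent with the values in this log:

```python
def parse_report_line(line: str) -> dict:
    # Split one iperf2 CSV report record into named fields.
    ts, src, sport, dst, dport, tid, interval, nbytes, bps = line.strip().split(",")
    return {
        "timestamp": ts,
        "src": (src, int(sport)),
        "dst": (dst, int(dport)),
        "transfer_id": int(tid),
        "interval": interval,
        "bytes": int(nbytes),
        "bits_per_sec": int(bps),
    }

def mbytes_per_sec(bits_per_sec: int) -> float:
    # The e2e summary table appears to use MB = 2**20 bytes.
    return bits_per_sec / 8 / (1 << 20)

# 10-second totals from the two client pods logged above:
node1_total = parse_report_line(
    "20220422231951.251,10.244.3.119,43262,10.233.48.239,6789,3,0.0-10.0,30279204864,24223300910"
)
node2_total = parse_report_line(
    "20220422232007.103,10.244.4.109,59692,10.233.48.239,6789,3,0.0-10.0,1173487616,938116431"
)
print(round(mbytes_per_sec(node1_total["bits_per_sec"])))  # 2888, the node1 -> node1 row
print(round(mbytes_per_sec(node2_total["bits_per_sec"])))  # 112, the node2 -> node1 row
```

The rounded values reproduce the 2888 and 112 MB/s figures in the summary table above, which is how the column order and the 2**20 MB convention were inferred.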
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:12.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-2855" for this suite.


• [SLOW TEST:45.544 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":1,"skipped":73,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
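Annotation: the `{"msg": ...}` lines interleaved in this log are per-spec JSON progress records emitted by the parallel test runner. A minimal sketch (the helper name is hypothetical) for extracting them when post-processing a run:

```python
import json

def iter_progress_records(log_lines):
    # Yield each per-spec JSON progress record found in the raw log.
    for line in log_lines:
        if line.startswith('{"msg"'):
            yield json.loads(line)

sample = [
    '{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] '
    'should run iperf2","total":-1,"completed":1,"skipped":73,"failed":1,'
    '"failures":["[sig-network] Conntrack should be able to preserve UDP traffic '
    'when server pod cycles for a NodePort service"]}',
]
for rec in iter_progress_records(sample):
    print(rec["completed"], rec["skipped"], rec["failed"])  # 1 73 1
```

Note that `failed`/`failures` here carry the worker's running tally (the earlier Conntrack failure), not the result of the spec named in `msg`.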

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:40.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-6426
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:19:45.292: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:45.416: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:47.419: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:49.421: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:51.419: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:53.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:55.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:57.419: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:59.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:01.418: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:03.420: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:05.420: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:07.420: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:20:07.425: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:20:15.470: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:20:15.470: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:20:15.479: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:15.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6426" for this suite.


S [SKIPPING] [35.315 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Apr 22 23:20:15.603: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:20:12.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Apr 22 23:20:12.444: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-8911" to be "Succeeded or Failed"
Apr 22 23:20:12.446: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.498534ms
Apr 22 23:20:14.453: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009146732s
Apr 22 23:20:16.457: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013355343s
STEP: Saw pod success
Apr 22 23:20:16.457: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:16.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8911" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":2,"skipped":153,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
Apr 22 23:20:16.468: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:20:10.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3121.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3121.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3121.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3121.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 22 23:20:16.986: INFO: DNS probes using dns-3121/dns-test-47cf304a-1c3b-4cae-bd5b-2690c71feca6 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:16.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3121" for this suite.


• [SLOW TEST:6.096 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":7,"skipped":968,"failed":0}
Apr 22 23:20:17.002: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:59.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Apr 22 23:18:59.992: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:01.996: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:03.996: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:05.998: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:07.996: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:09.997: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:11.997: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:13.997: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Apr 22 23:19:14.017: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:16.022: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:18.020: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:20.022: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:22.020: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
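Annotation: the `checksumer`/`ret` pairs in the boom-server log below follow standard ones-complement checksum folding (add the odd trailing byte, then fold carries above 16 bits back in). A minimal sketch, assuming the test binary folds its running sum this way, which reproduces the logged values:

```python
def fold_ones_complement(s: int) -> int:
    # Fold any carry bits above 16 back into the low 16 bits,
    # repeating until the sum fits in 16 bits.
    while s > 0xFFFF:
        s = (s >> 16) + (s & 0xFFFF)
    return s

# First connection below: sum 470938 plus odd byte 33 -> ret 470971,
# which folds to the logged ret 12226.
print(fold_ones_complement(470938 + 33))  # 12226
# Second connection: 485784 + 33 folds to 27072, as logged.
print(fold_ones_complement(485784 + 33))  # 27072
```

(The final packet checksum would additionally be the bitwise complement of this folded value; the log only prints the intermediate folds.)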
Apr 22 23:20:22.074: INFO: boom-server pod logs: 2022/04/22 23:19:07 external ip: 10.244.4.100
2022/04/22 23:19:07 listen on 0.0.0.0:9000
2022/04/22 23:19:07 probing 10.244.4.100
2022/04/22 23:19:19 tcp packet: &{SrcPort:41266 DestPort:9000 Seq:2178459203 Ack:0 Flags:40962 WindowSize:29200 Checksum:36714 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:19 tcp packet: &{SrcPort:41266 DestPort:9000 Seq:2178459204 Ack:3209763378 Flags:32784 WindowSize:229 Checksum:12521 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:19 connection established
2022/04/22 23:19:19 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 161 50 191 79 147 146 129 216 166 68 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:19 checksumer: &{sum:470938 oddByte:33 length:39}
2022/04/22 23:19:19 ret:  470971
2022/04/22 23:19:19 ret:  12226
2022/04/22 23:19:19 ret:  12226
2022/04/22 23:19:19 boom packet injected
2022/04/22 23:19:19 tcp packet: &{SrcPort:41266 DestPort:9000 Seq:2178459204 Ack:3209763378 Flags:32785 WindowSize:229 Checksum:12520 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:21 tcp packet: &{SrcPort:39489 DestPort:9000 Seq:917071732 Ack:0 Flags:40962 WindowSize:29200 Checksum:6282 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:21 tcp packet: &{SrcPort:39489 DestPort:9000 Seq:917071733 Ack:479677462 Flags:32784 WindowSize:229 Checksum:8974 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:21 connection established
2022/04/22 23:19:21 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 154 65 28 149 197 118 54 169 103 117 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:21 checksumer: &{sum:485784 oddByte:33 length:39}
2022/04/22 23:19:21 ret:  485817
2022/04/22 23:19:21 ret:  27072
2022/04/22 23:19:21 ret:  27072
2022/04/22 23:19:21 boom packet injected
2022/04/22 23:19:21 tcp packet: &{SrcPort:39489 DestPort:9000 Seq:917071733 Ack:479677462 Flags:32785 WindowSize:229 Checksum:8973 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:23 tcp packet: &{SrcPort:41081 DestPort:9000 Seq:884626153 Ack:0 Flags:40962 WindowSize:29200 Checksum:8443 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:23 tcp packet: &{SrcPort:41081 DestPort:9000 Seq:884626154 Ack:464295558 Flags:32784 WindowSize:229 Checksum:55848 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:23 connection established
2022/04/22 23:19:23 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 160 121 27 171 15 230 52 186 82 234 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:23 checksumer: &{sum:568528 oddByte:33 length:39}
2022/04/22 23:19:23 ret:  568561
2022/04/22 23:19:23 ret:  44281
2022/04/22 23:19:23 ret:  44281
2022/04/22 23:19:23 boom packet injected
2022/04/22 23:19:23 tcp packet: &{SrcPort:41081 DestPort:9000 Seq:884626154 Ack:464295558 Flags:32785 WindowSize:229 Checksum:55847 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:25 tcp packet: &{SrcPort:33721 DestPort:9000 Seq:1422611823 Ack:0 Flags:40962 WindowSize:29200 Checksum:4947 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:25 tcp packet: &{SrcPort:33721 DestPort:9000 Seq:1422611824 Ack:650128755 Flags:32784 WindowSize:229 Checksum:8880 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:25 connection established
2022/04/22 23:19:25 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 131 185 38 190 166 211 84 203 85 112 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:25 checksumer: &{sum:558200 oddByte:33 length:39}
2022/04/22 23:19:25 ret:  558233
2022/04/22 23:19:25 ret:  33953
2022/04/22 23:19:25 ret:  33953
2022/04/22 23:19:25 boom packet injected
2022/04/22 23:19:25 tcp packet: &{SrcPort:33721 DestPort:9000 Seq:1422611824 Ack:650128755 Flags:32785 WindowSize:229 Checksum:8879 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:27 tcp packet: &{SrcPort:42034 DestPort:9000 Seq:3212736183 Ack:0 Flags:40962 WindowSize:29200 Checksum:24334 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:27 tcp packet: &{SrcPort:42034 DestPort:9000 Seq:3212736184 Ack:1321136866 Flags:32784 WindowSize:229 Checksum:31020 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:27 connection established
2022/04/22 23:19:27 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 164 50 78 189 108 66 191 126 118 184 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:27 checksumer: &{sum:485139 oddByte:33 length:39}
2022/04/22 23:19:27 ret:  485172
2022/04/22 23:19:27 ret:  26427
2022/04/22 23:19:27 ret:  26427
2022/04/22 23:19:27 boom packet injected
2022/04/22 23:19:27 tcp packet: &{SrcPort:42034 DestPort:9000 Seq:3212736184 Ack:1321136866 Flags:32785 WindowSize:229 Checksum:31019 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:29 tcp packet: &{SrcPort:41266 DestPort:9000 Seq:2178459205 Ack:3209763379 Flags:32784 WindowSize:229 Checksum:58053 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:29 tcp packet: &{SrcPort:46806 DestPort:9000 Seq:3711109347 Ack:0 Flags:40962 WindowSize:29200 Checksum:38072 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:29 tcp packet: &{SrcPort:46806 DestPort:9000 Seq:3711109348 Ack:694079614 Flags:32784 WindowSize:229 Checksum:61130 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:29 connection established
2022/04/22 23:19:29 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 182 214 41 93 73 222 221 51 8 228 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:29 checksumer: &{sum:534413 oddByte:33 length:39}
2022/04/22 23:19:29 ret:  534446
2022/04/22 23:19:29 ret:  10166
2022/04/22 23:19:29 ret:  10166
2022/04/22 23:19:29 boom packet injected
2022/04/22 23:19:29 tcp packet: &{SrcPort:46806 DestPort:9000 Seq:3711109348 Ack:694079614 Flags:32785 WindowSize:229 Checksum:61129 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:31 tcp packet: &{SrcPort:39489 DestPort:9000 Seq:917071734 Ack:479677463 Flags:32784 WindowSize:229 Checksum:54506 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:31 tcp packet: &{SrcPort:34691 DestPort:9000 Seq:2119196134 Ack:0 Flags:40962 WindowSize:29200 Checksum:50715 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:31 tcp packet: &{SrcPort:34691 DestPort:9000 Seq:2119196135 Ack:3561127424 Flags:32784 WindowSize:229 Checksum:49142 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:31 connection established
2022/04/22 23:19:31 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 135 131 212 64 247 96 126 80 93 231 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:31 checksumer: &{sum:481965 oddByte:33 length:39}
2022/04/22 23:19:31 ret:  481998
2022/04/22 23:19:31 ret:  23253
2022/04/22 23:19:31 ret:  23253
2022/04/22 23:19:31 boom packet injected
2022/04/22 23:19:31 tcp packet: &{SrcPort:34691 DestPort:9000 Seq:2119196135 Ack:3561127424 Flags:32785 WindowSize:229 Checksum:49141 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:33 tcp packet: &{SrcPort:41081 DestPort:9000 Seq:884626155 Ack:464295559 Flags:32784 WindowSize:229 Checksum:35845 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:33 tcp packet: &{SrcPort:37803 DestPort:9000 Seq:4072521725 Ack:0 Flags:40962 WindowSize:29200 Checksum:56221 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:33 tcp packet: &{SrcPort:37803 DestPort:9000 Seq:4072521726 Ack:1031567942 Flags:32784 WindowSize:229 Checksum:26665 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:33 connection established
2022/04/22 23:19:33 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 147 171 61 122 243 166 242 189 191 254 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:33 checksumer: &{sum:558836 oddByte:33 length:39}
2022/04/22 23:19:33 ret:  558869
2022/04/22 23:19:33 ret:  34589
2022/04/22 23:19:33 ret:  34589
2022/04/22 23:19:33 boom packet injected
2022/04/22 23:19:33 tcp packet: &{SrcPort:37803 DestPort:9000 Seq:4072521726 Ack:1031567942 Flags:32785 WindowSize:229 Checksum:26664 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:35 tcp packet: &{SrcPort:33721 DestPort:9000 Seq:1422611825 Ack:650128756 Flags:32784 WindowSize:229 Checksum:54411 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:35 tcp packet: &{SrcPort:43797 DestPort:9000 Seq:597898059 Ack:0 Flags:40962 WindowSize:29200 Checksum:6193 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:35 tcp packet: &{SrcPort:43797 DestPort:9000 Seq:597898060 Ack:3503262604 Flags:32784 WindowSize:229 Checksum:63569 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:35 connection established
2022/04/22 23:19:35 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 171 21 208 206 4 236 35 163 51 76 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:35 checksumer: &{sum:507221 oddByte:33 length:39}
2022/04/22 23:19:35 ret:  507254
2022/04/22 23:19:35 ret:  48509
2022/04/22 23:19:35 ret:  48509
2022/04/22 23:19:35 boom packet injected
2022/04/22 23:19:35 tcp packet: &{SrcPort:43797 DestPort:9000 Seq:597898060 Ack:3503262604 Flags:32785 WindowSize:229 Checksum:63568 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:37 tcp packet: &{SrcPort:42034 DestPort:9000 Seq:3212736185 Ack:1321136867 Flags:32784 WindowSize:229 Checksum:11017 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:38 tcp packet: &{SrcPort:44663 DestPort:9000 Seq:2334767876 Ack:0 Flags:40962 WindowSize:29200 Checksum:7573 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:38 tcp packet: &{SrcPort:44663 DestPort:9000 Seq:2334767877 Ack:3700797196 Flags:32784 WindowSize:229 Checksum:50805 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:38 connection established
2022/04/22 23:19:38 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 174 119 220 148 40 108 139 41 187 5 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:38 checksumer: &{sum:435576 oddByte:33 length:39}
2022/04/22 23:19:38 ret:  435609
2022/04/22 23:19:38 ret:  42399
2022/04/22 23:19:38 ret:  42399
2022/04/22 23:19:38 boom packet injected
2022/04/22 23:19:38 tcp packet: &{SrcPort:44663 DestPort:9000 Seq:2334767877 Ack:3700797196 Flags:32785 WindowSize:229 Checksum:50804 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:39 tcp packet: &{SrcPort:46806 DestPort:9000 Seq:3711109349 Ack:694079615 Flags:32784 WindowSize:229 Checksum:41126 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:39 tcp packet: &{SrcPort:44962 DestPort:9000 Seq:2327554211 Ack:0 Flags:40962 WindowSize:29200 Checksum:10129 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:39 tcp packet: &{SrcPort:44962 DestPort:9000 Seq:2327554212 Ack:1372019518 Flags:32784 WindowSize:229 Checksum:42855 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:39 connection established
2022/04/22 23:19:39 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 175 162 81 197 212 158 138 187 168 164 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:39 checksumer: &{sum:550022 oddByte:33 length:39}
2022/04/22 23:19:39 ret:  550055
2022/04/22 23:19:39 ret:  25775
2022/04/22 23:19:39 ret:  25775
2022/04/22 23:19:39 boom packet injected
2022/04/22 23:19:39 tcp packet: &{SrcPort:44962 DestPort:9000 Seq:2327554212 Ack:1372019518 Flags:32785 WindowSize:229 Checksum:42854 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:41 tcp packet: &{SrcPort:34691 DestPort:9000 Seq:2119196136 Ack:3561127425 Flags:32784 WindowSize:229 Checksum:29139 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:41 tcp packet: &{SrcPort:41084 DestPort:9000 Seq:2999699488 Ack:0 Flags:40962 WindowSize:29200 Checksum:59225 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:41 tcp packet: &{SrcPort:41084 DestPort:9000 Seq:2999699489 Ack:1107206748 Flags:32784 WindowSize:229 Checksum:10251 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:41 connection established
2022/04/22 23:19:41 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 160 124 65 253 27 188 178 203 200 33 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:41 checksumer: &{sum:532726 oddByte:33 length:39}
2022/04/22 23:19:41 ret:  532759
2022/04/22 23:19:41 ret:  8479
2022/04/22 23:19:41 ret:  8479
2022/04/22 23:19:41 boom packet injected
2022/04/22 23:19:41 tcp packet: &{SrcPort:41084 DestPort:9000 Seq:2999699489 Ack:1107206748 Flags:32785 WindowSize:229 Checksum:10250 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:43 tcp packet: &{SrcPort:37803 DestPort:9000 Seq:4072521727 Ack:1031567943 Flags:32784 WindowSize:229 Checksum:6662 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:43 tcp packet: &{SrcPort:40761 DestPort:9000 Seq:3256605424 Ack:0 Flags:40962 WindowSize:29200 Checksum:48811 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:43 tcp packet: &{SrcPort:40761 DestPort:9000 Seq:3256605425 Ack:3146476662 Flags:32784 WindowSize:229 Checksum:46051 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:43 connection established
2022/04/22 23:19:43 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 159 57 187 137 229 214 194 27 218 241 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:43 checksumer: &{sum:501083 oddByte:33 length:39}
2022/04/22 23:19:43 ret:  501116
2022/04/22 23:19:43 ret:  42371
2022/04/22 23:19:43 ret:  42371
2022/04/22 23:19:43 boom packet injected
2022/04/22 23:19:43 tcp packet: &{SrcPort:40761 DestPort:9000 Seq:3256605425 Ack:3146476662 Flags:32785 WindowSize:229 Checksum:46050 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:45 tcp packet: &{SrcPort:43797 DestPort:9000 Seq:597898061 Ack:3503262605 Flags:32784 WindowSize:229 Checksum:43566 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:45 tcp packet: &{SrcPort:33123 DestPort:9000 Seq:3704636855 Ack:0 Flags:40962 WindowSize:29200 Checksum:20278 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:45 tcp packet: &{SrcPort:33123 DestPort:9000 Seq:3704636856 Ack:2664098310 Flags:32784 WindowSize:229 Checksum:56270 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:45 connection established
2022/04/22 23:19:45 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 129 99 158 201 99 102 220 208 69 184 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:45 checksumer: &{sum:530979 oddByte:33 length:39}
2022/04/22 23:19:45 ret:  531012
2022/04/22 23:19:45 ret:  6732
2022/04/22 23:19:45 ret:  6732
2022/04/22 23:19:45 boom packet injected
2022/04/22 23:19:45 tcp packet: &{SrcPort:33123 DestPort:9000 Seq:3704636856 Ack:2664098310 Flags:32785 WindowSize:229 Checksum:56269 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:47 tcp packet: &{SrcPort:37234 DestPort:9000 Seq:1754552120 Ack:0 Flags:40962 WindowSize:29200 Checksum:39441 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:47 tcp packet: &{SrcPort:37234 DestPort:9000 Seq:1754552121 Ack:1666802446 Flags:32784 WindowSize:229 Checksum:57667 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:47 connection established
2022/04/22 23:19:47 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 145 114 99 87 220 110 104 148 87 57 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:47 checksumer: &{sum:459791 oddByte:33 length:39}
2022/04/22 23:19:47 ret:  459824
2022/04/22 23:19:47 ret:  1079
2022/04/22 23:19:47 ret:  1079
2022/04/22 23:19:47 boom packet injected
2022/04/22 23:19:47 tcp packet: &{SrcPort:37234 DestPort:9000 Seq:1754552121 Ack:1666802446 Flags:32785 WindowSize:229 Checksum:57666 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:48 tcp packet: &{SrcPort:44663 DestPort:9000 Seq:2334767878 Ack:3700797197 Flags:32784 WindowSize:229 Checksum:30802 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:49 tcp packet: &{SrcPort:41368 DestPort:9000 Seq:4271974349 Ack:0 Flags:40962 WindowSize:29200 Checksum:7033 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:49 tcp packet: &{SrcPort:41368 DestPort:9000 Seq:4271974350 Ack:563047954 Flags:32784 WindowSize:229 Checksum:37281 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:49 connection established
2022/04/22 23:19:49 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 161 152 33 141 231 114 254 161 39 206 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:49 checksumer: &{sum:525902 oddByte:33 length:39}
2022/04/22 23:19:49 ret:  525935
2022/04/22 23:19:49 ret:  1655
2022/04/22 23:19:49 ret:  1655
2022/04/22 23:19:49 boom packet injected
2022/04/22 23:19:49 tcp packet: &{SrcPort:41368 DestPort:9000 Seq:4271974350 Ack:563047954 Flags:32785 WindowSize:229 Checksum:37280 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:49 tcp packet: &{SrcPort:44962 DestPort:9000 Seq:2327554213 Ack:1372019519 Flags:32784 WindowSize:229 Checksum:22843 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:51 tcp packet: &{SrcPort:41084 DestPort:9000 Seq:2999699490 Ack:1107206749 Flags:32784 WindowSize:229 Checksum:55782 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:51 tcp packet: &{SrcPort:44221 DestPort:9000 Seq:4190571935 Ack:0 Flags:40962 WindowSize:29200 Checksum:10123 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:51 tcp packet: &{SrcPort:44221 DestPort:9000 Seq:4190571936 Ack:3631574605 Flags:32784 WindowSize:229 Checksum:57024 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:51 connection established
2022/04/22 23:19:51 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 172 189 216 115 231 173 249 199 13 160 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:51 checksumer: &{sum:541937 oddByte:33 length:39}
2022/04/22 23:19:51 ret:  541970
2022/04/22 23:19:51 ret:  17690
2022/04/22 23:19:51 ret:  17690
2022/04/22 23:19:51 boom packet injected
2022/04/22 23:19:51 tcp packet: &{SrcPort:44221 DestPort:9000 Seq:4190571936 Ack:3631574605 Flags:32785 WindowSize:229 Checksum:57023 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:53 tcp packet: &{SrcPort:40761 DestPort:9000 Seq:3256605426 Ack:3146476663 Flags:32784 WindowSize:229 Checksum:26048 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:53 tcp packet: &{SrcPort:38173 DestPort:9000 Seq:563827152 Ack:0 Flags:40962 WindowSize:29200 Checksum:52053 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:53 tcp packet: &{SrcPort:38173 DestPort:9000 Seq:563827153 Ack:1487926721 Flags:32784 WindowSize:229 Checksum:29452 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:53 connection established
2022/04/22 23:19:53 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 149 29 88 174 111 33 33 155 81 209 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:53 checksumer: &{sum:481102 oddByte:33 length:39}
2022/04/22 23:19:53 ret:  481135
2022/04/22 23:19:53 ret:  22390
2022/04/22 23:19:53 ret:  22390
2022/04/22 23:19:53 boom packet injected
2022/04/22 23:19:53 tcp packet: &{SrcPort:38173 DestPort:9000 Seq:563827153 Ack:1487926721 Flags:32785 WindowSize:229 Checksum:29451 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:55 tcp packet: &{SrcPort:33123 DestPort:9000 Seq:3704636857 Ack:2664098311 Flags:32784 WindowSize:229 Checksum:36267 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:55 tcp packet: &{SrcPort:43781 DestPort:9000 Seq:562615509 Ack:0 Flags:40962 WindowSize:29200 Checksum:10923 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:55 tcp packet: &{SrcPort:43781 DestPort:9000 Seq:562615510 Ack:811935098 Flags:32784 WindowSize:229 Checksum:49955 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:55 connection established
2022/04/22 23:19:55 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 171 5 48 99 158 218 33 136 212 214 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:55 checksumer: &{sum:499694 oddByte:33 length:39}
2022/04/22 23:19:55 ret:  499727
2022/04/22 23:19:55 ret:  40982
2022/04/22 23:19:55 ret:  40982
2022/04/22 23:19:55 boom packet injected
2022/04/22 23:19:55 tcp packet: &{SrcPort:43781 DestPort:9000 Seq:562615510 Ack:811935098 Flags:32785 WindowSize:229 Checksum:49954 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:57 tcp packet: &{SrcPort:37234 DestPort:9000 Seq:1754552122 Ack:1666802447 Flags:32784 WindowSize:229 Checksum:37664 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:57 tcp packet: &{SrcPort:46751 DestPort:9000 Seq:1659023229 Ack:0 Flags:40962 WindowSize:29200 Checksum:64317 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:19:57 tcp packet: &{SrcPort:46751 DestPort:9000 Seq:1659023230 Ack:3429305633 Flags:32784 WindowSize:229 Checksum:2107 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:57 connection established
2022/04/22 23:19:57 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 182 159 204 101 134 129 98 226 175 126 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:19:57 checksumer: &{sum:517529 oddByte:33 length:39}
2022/04/22 23:19:57 ret:  517562
2022/04/22 23:19:57 ret:  58817
2022/04/22 23:19:57 ret:  58817
2022/04/22 23:19:57 boom packet injected
2022/04/22 23:19:57 tcp packet: &{SrcPort:46751 DestPort:9000 Seq:1659023230 Ack:3429305633 Flags:32785 WindowSize:229 Checksum:2106 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:19:59 tcp packet: &{SrcPort:41368 DestPort:9000 Seq:4271974351 Ack:563047955 Flags:32784 WindowSize:229 Checksum:17277 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:00 tcp packet: &{SrcPort:45036 DestPort:9000 Seq:3132662125 Ack:0 Flags:40962 WindowSize:29200 Checksum:45130 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:00 tcp packet: &{SrcPort:45036 DestPort:9000 Seq:3132662126 Ack:1888882783 Flags:32784 WindowSize:229 Checksum:3577 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:00 connection established
2022/04/22 23:20:00 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 175 236 112 148 137 191 186 184 161 110 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:00 checksumer: &{sum:550275 oddByte:33 length:39}
2022/04/22 23:20:00 ret:  550308
2022/04/22 23:20:00 ret:  26028
2022/04/22 23:20:00 ret:  26028
2022/04/22 23:20:00 boom packet injected
2022/04/22 23:20:00 tcp packet: &{SrcPort:45036 DestPort:9000 Seq:3132662126 Ack:1888882783 Flags:32785 WindowSize:229 Checksum:3576 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:01 tcp packet: &{SrcPort:44221 DestPort:9000 Seq:4190571937 Ack:3631574606 Flags:32784 WindowSize:229 Checksum:37019 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:02 tcp packet: &{SrcPort:34112 DestPort:9000 Seq:2283396859 Ack:0 Flags:40962 WindowSize:29200 Checksum:51253 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:02 tcp packet: &{SrcPort:34112 DestPort:9000 Seq:2283396860 Ack:2596737294 Flags:32784 WindowSize:229 Checksum:62257 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:02 connection established
2022/04/22 23:20:02 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 133 64 154 197 138 110 136 25 222 252 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:02 checksumer: &{sum:493711 oddByte:33 length:39}
2022/04/22 23:20:02 ret:  493744
2022/04/22 23:20:02 ret:  34999
2022/04/22 23:20:02 ret:  34999
2022/04/22 23:20:02 boom packet injected
2022/04/22 23:20:02 tcp packet: &{SrcPort:34112 DestPort:9000 Seq:2283396860 Ack:2596737294 Flags:32785 WindowSize:229 Checksum:62256 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:03 tcp packet: &{SrcPort:38173 DestPort:9000 Seq:563827154 Ack:1487926722 Flags:32784 WindowSize:229 Checksum:9445 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:04 tcp packet: &{SrcPort:34261 DestPort:9000 Seq:1614881842 Ack:0 Flags:40962 WindowSize:29200 Checksum:41582 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:04 tcp packet: &{SrcPort:34261 DestPort:9000 Seq:1614881843 Ack:2849790526 Flags:32784 WindowSize:229 Checksum:27978 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:04 connection established
2022/04/22 23:20:04 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 133 213 169 218 211 158 96 65 36 51 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:04 checksumer: &{sum:508165 oddByte:33 length:39}
2022/04/22 23:20:04 ret:  508198
2022/04/22 23:20:04 ret:  49453
2022/04/22 23:20:04 ret:  49453
2022/04/22 23:20:04 boom packet injected
2022/04/22 23:20:04 tcp packet: &{SrcPort:34261 DestPort:9000 Seq:1614881843 Ack:2849790526 Flags:32785 WindowSize:229 Checksum:27977 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:05 tcp packet: &{SrcPort:43781 DestPort:9000 Seq:562615511 Ack:811935099 Flags:32784 WindowSize:229 Checksum:29946 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:06 tcp packet: &{SrcPort:46272 DestPort:9000 Seq:2170697032 Ack:0 Flags:40962 WindowSize:29200 Checksum:14714 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:06 tcp packet: &{SrcPort:46272 DestPort:9000 Seq:2170697033 Ack:3565624047 Flags:32784 WindowSize:229 Checksum:4392 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:06 connection established
2022/04/22 23:20:06 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 180 192 212 133 148 79 129 98 53 73 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:06 checksumer: &{sum:474962 oddByte:33 length:39}
2022/04/22 23:20:06 ret:  474995
2022/04/22 23:20:06 ret:  16250
2022/04/22 23:20:06 ret:  16250
2022/04/22 23:20:06 boom packet injected
2022/04/22 23:20:06 tcp packet: &{SrcPort:46272 DestPort:9000 Seq:2170697033 Ack:3565624047 Flags:32785 WindowSize:229 Checksum:4391 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:07 tcp packet: &{SrcPort:46751 DestPort:9000 Seq:1659023231 Ack:3429305634 Flags:32784 WindowSize:229 Checksum:47638 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:08 tcp packet: &{SrcPort:36914 DestPort:9000 Seq:2390732145 Ack:0 Flags:40962 WindowSize:29200 Checksum:53497 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:08 tcp packet: &{SrcPort:36914 DestPort:9000 Seq:2390732146 Ack:4112694007 Flags:32784 WindowSize:229 Checksum:57413 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:08 connection established
2022/04/22 23:20:08 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 144 50 245 33 52 87 142 127 173 114 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:08 checksumer: &{sum:433012 oddByte:33 length:39}
2022/04/22 23:20:08 ret:  433045
2022/04/22 23:20:08 ret:  39835
2022/04/22 23:20:08 ret:  39835
2022/04/22 23:20:08 boom packet injected
2022/04/22 23:20:08 tcp packet: &{SrcPort:36914 DestPort:9000 Seq:2390732146 Ack:4112694007 Flags:32785 WindowSize:229 Checksum:57412 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:10 tcp packet: &{SrcPort:39575 DestPort:9000 Seq:250325539 Ack:0 Flags:40962 WindowSize:29200 Checksum:16807 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:10 tcp packet: &{SrcPort:39575 DestPort:9000 Seq:250325540 Ack:3310267535 Flags:32784 WindowSize:229 Checksum:34655 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:10 connection established
2022/04/22 23:20:10 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 154 151 197 77 37 239 14 235 170 36 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:10 checksumer: &{sum:516540 oddByte:33 length:39}
2022/04/22 23:20:10 ret:  516573
2022/04/22 23:20:10 ret:  57828
2022/04/22 23:20:10 ret:  57828
2022/04/22 23:20:10 boom packet injected
2022/04/22 23:20:10 tcp packet: &{SrcPort:39575 DestPort:9000 Seq:250325540 Ack:3310267535 Flags:32785 WindowSize:229 Checksum:34654 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:10 tcp packet: &{SrcPort:45036 DestPort:9000 Seq:3132662127 Ack:1888882784 Flags:32784 WindowSize:229 Checksum:49107 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:12 tcp packet: &{SrcPort:38482 DestPort:9000 Seq:1025917039 Ack:0 Flags:40962 WindowSize:29200 Checksum:31124 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:12 tcp packet: &{SrcPort:38482 DestPort:9000 Seq:1025917040 Ack:2256503375 Flags:32784 WindowSize:229 Checksum:8331 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:12 connection established
2022/04/22 23:20:12 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 150 82 134 125 251 175 61 38 64 112 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:12 checksumer: &{sum:463892 oddByte:33 length:39}
2022/04/22 23:20:12 ret:  463925
2022/04/22 23:20:12 ret:  5180
2022/04/22 23:20:12 ret:  5180
2022/04/22 23:20:12 boom packet injected
2022/04/22 23:20:12 tcp packet: &{SrcPort:38482 DestPort:9000 Seq:1025917040 Ack:2256503375 Flags:32785 WindowSize:229 Checksum:8330 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:12 tcp packet: &{SrcPort:34112 DestPort:9000 Seq:2283396861 Ack:2596737295 Flags:32784 WindowSize:229 Checksum:42252 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:14 tcp packet: &{SrcPort:41427 DestPort:9000 Seq:3756681568 Ack:0 Flags:40962 WindowSize:29200 Checksum:39564 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:14 tcp packet: &{SrcPort:41427 DestPort:9000 Seq:3756681569 Ack:511552739 Flags:32784 WindowSize:229 Checksum:30496 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:14 connection established
2022/04/22 23:20:14 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 161 211 30 124 38 67 223 234 105 97 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:14 checksumer: &{sum:515245 oddByte:33 length:39}
2022/04/22 23:20:14 ret:  515278
2022/04/22 23:20:14 ret:  56533
2022/04/22 23:20:14 ret:  56533
2022/04/22 23:20:14 boom packet injected
2022/04/22 23:20:14 tcp packet: &{SrcPort:41427 DestPort:9000 Seq:3756681569 Ack:511552739 Flags:32785 WindowSize:229 Checksum:30495 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:14 tcp packet: &{SrcPort:34261 DestPort:9000 Seq:1614881844 Ack:2849790527 Flags:32784 WindowSize:229 Checksum:7966 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:16 tcp packet: &{SrcPort:40346 DestPort:9000 Seq:735637873 Ack:0 Flags:40962 WindowSize:29200 Checksum:49909 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:16 tcp packet: &{SrcPort:40346 DestPort:9000 Seq:735637874 Ack:157078559 Flags:32784 WindowSize:229 Checksum:34205 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:16 connection established
2022/04/22 23:20:16 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 157 154 9 91 77 127 43 216 241 114 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:16 checksumer: &{sum:507279 oddByte:33 length:39}
2022/04/22 23:20:16 ret:  507312
2022/04/22 23:20:16 ret:  48567
2022/04/22 23:20:16 ret:  48567
2022/04/22 23:20:16 boom packet injected
2022/04/22 23:20:16 tcp packet: &{SrcPort:40346 DestPort:9000 Seq:735637874 Ack:157078559 Flags:32785 WindowSize:229 Checksum:34204 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:16 tcp packet: &{SrcPort:46272 DestPort:9000 Seq:2170697034 Ack:3565624048 Flags:32784 WindowSize:229 Checksum:49912 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:18 tcp packet: &{SrcPort:36914 DestPort:9000 Seq:2390732147 Ack:4112694008 Flags:32784 WindowSize:229 Checksum:37410 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:18 tcp packet: &{SrcPort:45560 DestPort:9000 Seq:1474728342 Ack:0 Flags:40962 WindowSize:29200 Checksum:55955 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:18 tcp packet: &{SrcPort:45560 DestPort:9000 Seq:1474728343 Ack:1986400765 Flags:32784 WindowSize:229 Checksum:60034 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:18 connection established
2022/04/22 23:20:18 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 177 248 118 100 139 93 87 230 145 151 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:18 checksumer: &{sum:538138 oddByte:33 length:39}
2022/04/22 23:20:18 ret:  538171
2022/04/22 23:20:18 ret:  13891
2022/04/22 23:20:18 ret:  13891
2022/04/22 23:20:18 boom packet injected
2022/04/22 23:20:18 tcp packet: &{SrcPort:45560 DestPort:9000 Seq:1474728343 Ack:1986400765 Flags:32785 WindowSize:229 Checksum:60033 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:20 tcp packet: &{SrcPort:39575 DestPort:9000 Seq:250325541 Ack:3310267536 Flags:32784 WindowSize:229 Checksum:14652 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:20 tcp packet: &{SrcPort:34276 DestPort:9000 Seq:2404641800 Ack:0 Flags:40962 WindowSize:29200 Checksum:27895 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:20 tcp packet: &{SrcPort:34276 DestPort:9000 Seq:2404641801 Ack:3750594162 Flags:32784 WindowSize:229 Checksum:38777 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:20 connection established
2022/04/22 23:20:20 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 133 228 223 139 255 210 143 83 236 9 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:20 checksumer: &{sum:499294 oddByte:33 length:39}
2022/04/22 23:20:20 ret:  499327
2022/04/22 23:20:20 ret:  40582
2022/04/22 23:20:20 ret:  40582
2022/04/22 23:20:20 boom packet injected
2022/04/22 23:20:20 tcp packet: &{SrcPort:34276 DestPort:9000 Seq:2404641801 Ack:3750594162 Flags:32785 WindowSize:229 Checksum:38776 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:22 tcp packet: &{SrcPort:38482 DestPort:9000 Seq:1025917041 Ack:2256503376 Flags:32784 WindowSize:229 Checksum:53862 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:22 tcp packet: &{SrcPort:34854 DestPort:9000 Seq:1957479468 Ack:0 Flags:40962 WindowSize:29200 Checksum:42344 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.109
2022/04/22 23:20:22 tcp packet: &{SrcPort:34854 DestPort:9000 Seq:1957479469 Ack:3859981445 Flags:32784 WindowSize:229 Checksum:41857 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.109
2022/04/22 23:20:22 connection established
2022/04/22 23:20:22 calling checksumTCP: 10.244.4.100 10.244.3.109 [35 40 136 38 230 17 29 229 116 172 196 45 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2022/04/22 23:20:22 checksumer: &{sum:456003 oddByte:33 length:39}
2022/04/22 23:20:22 ret:  456036
2022/04/22 23:20:22 ret:  62826
2022/04/22 23:20:22 ret:  62826
2022/04/22 23:20:22 boom packet injected
2022/04/22 23:20:22 tcp packet: &{SrcPort:34854 DestPort:9000 Seq:1957479469 Ack:3859981445 Flags:32785 WindowSize:229 Checksum:41856 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.109
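Annotation: the `checksumer` lines above record a running 32-bit ones'-complement sum over the TCP pseudo-header and payload, with the trailing odd byte (`oddByte:33`) added separately; the first `ret:` is sum + odd byte and the two identical `ret:` lines after it are the 16-bit carry fold. The payload bytes `[98 111 111 109 33 33 33]` decode to `boom!!!`, which is why each cycle ends with "boom packet injected". A minimal sketch of the fold step, reproducing the values logged above (not the test's actual code):

```python
def fold16(s: int) -> int:
    """Fold a 32-bit ones'-complement sum into 16 bits, wrapping carries
    back into the low word until none remain (RFC 1071 style)."""
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return s

# Values taken from the log: running sum, odd byte 33, folded result.
assert fold16(459791 + 33) == 1079    # 23:19:47 cycle
assert fold16(525902 + 33) == 1655    # 23:19:49 cycle
assert fold16(541937 + 33) == 17690   # 23:19:51 cycle

# The injected payload is literally the string "boom!!!".
assert bytes([98, 111, 111, 109, 33, 33, 33]) == b"boom!!!"
```

The final TCP checksum would be the ones' complement of this folded value; the log prints the pre-complement fold.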

Apr 22 23:20:22.074: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:22.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-2311" for this suite.


• [SLOW TEST:82.130 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":4,"skipped":425,"failed":0}
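Annotation: the raw `Flags:` values in the packet lines of this test are the TCP header's combined 16-bit data-offset/flags word, not just the flag bits: 40962 (0xA002) is a 40-byte header with SYN set, 32784 (0x8010) a 32-byte header with ACK, and 32785 (0x8011) FIN+ACK, which matches the `flag:` labels the logger prints alongside them. A small decoder illustrating the split (a sketch, not the framework's code):

```python
# TCP control-bit positions per RFC 793.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def decode_offset_flags(field: int):
    """Split the 16-bit offset/flags word: top nibble is the data offset
    in 32-bit words, low 9 bits are the control flags."""
    header_bytes = ((field >> 12) & 0xF) * 4
    flags = field & 0x1FF
    names = [n for n, bit in (("FIN", FIN), ("SYN", SYN), ("RST", RST),
                              ("PSH", PSH), ("ACK", ACK), ("URG", URG))
             if flags & bit]
    return header_bytes, names

assert decode_offset_flags(40962) == (40, ["SYN"])         # handshake SYN
assert decode_offset_flags(32784) == (32, ["ACK"])         # plain ACK
assert decode_offset_flags(32785) == (32, ["FIN", "ACK"])  # FIN ACK
```

The 40-byte header on the SYN reflects TCP options in the initial segment; subsequent segments carry the bare 32-byte header.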
Apr 22 23:20:22.084: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":2,"skipped":813,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:56.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-6946
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:19:56.527: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:56.561: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:58.565: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:00.566: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:02.565: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:04.564: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:06.564: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:08.565: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:10.564: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:12.565: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:14.566: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:16.564: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:18.565: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:20:18.572: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:20:24.608: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:20:24.608: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:20:24.616: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:24.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6946" for this suite.


S [SKIPPING] [28.222 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Apr 22 23:20:24.628: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:57.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-5042
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:19:57.675: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:19:57.717: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:19:59.721: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:01.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:03.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:05.722: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:07.722: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:09.724: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:11.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:13.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:15.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:17.721: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:19.723: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:20:19.728: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:20:25.751: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:20:25.751: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:20:25.758: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:25.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5042" for this suite.


S [SKIPPING] [28.202 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Apr 22 23:20:25.770: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:08.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-7729
Apr 22 23:18:08.852: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-7729
I0422 23:18:08.865831      30 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-7729, replica count: 2
I0422 23:18:11.917596      30 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:18:14.918165      30 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:18:17.918558      30 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:18:20.919590      30 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 22 23:18:20.919: INFO: Creating new exec pod
Apr 22 23:18:25.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Apr 22 23:18:26.171: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Apr 22 23:18:26.171: INFO: stdout: "nodeport-update-service-n6svj"
Apr 22 23:18:26.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.40.139 80'
Apr 22 23:18:26.438: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.40.139 80\nConnection to 10.233.40.139 80 port [tcp/http] succeeded!\n"
Apr 22 23:18:26.438: INFO: stdout: "nodeport-update-service-mcmjp"
Apr 22 23:18:26.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:18:26.711: INFO: rc: 1
Apr 22 23:18:26.711: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
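Annotation: each retry block in this section is the same probe: `kubectl exec` into the client pod, run `nc -v -t -w 2 <node-ip> <node-port>`, and treat a non-zero exit code as "not yet reachable" before sleeping and retrying. The per-attempt pass/fail decision reduces to roughly the check below (a sketch; `probe_succeeded` is a hypothetical name, not from the e2e framework — on success `nc` exits 0 and the backend echoes its pod name on stdout):

```python
def probe_succeeded(rc: int, stdout: str) -> bool:
    """A single NodePort reachability attempt passes only if nc exited 0
    and the service backend answered with a non-empty hostname."""
    return rc == 0 and stdout.strip() != ""

# Failed attempt from the log: rc 1, empty stdout -> retry.
assert probe_succeeded(1, "") is False
# Earlier successful ClusterIP probe: rc 0, backend pod name on stdout.
assert probe_succeeded(0, "nodeport-update-service-n6svj") is True
```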
Apr 22 23:18:27.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:18:29.100: INFO: rc: 1
Apr 22 23:18:29.100: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:18:29.712 - 23:19:19.095: INFO: (the identical probe above was retried roughly once per second, ~50 more times; every attempt exited rc: 1 with stderr "nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused", "command terminated with exit code 1", "exit status 1", followed by "Retrying...")
Apr 22 23:19:19.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:19.988: INFO: rc: 1
Apr 22 23:19:19.988: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:20.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:20.947: INFO: rc: 1
Apr 22 23:19:20.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:21.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:21.957: INFO: rc: 1
Apr 22 23:19:21.957: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:22.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:22.983: INFO: rc: 1
Apr 22 23:19:22.984: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:23.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:23.969: INFO: rc: 1
Apr 22 23:19:23.969: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:24.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:24.962: INFO: rc: 1
Apr 22 23:19:24.962: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:25.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:25.981: INFO: rc: 1
Apr 22 23:19:25.981: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:26.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:27.086: INFO: rc: 1
Apr 22 23:19:27.086: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:27.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:28.032: INFO: rc: 1
Apr 22 23:19:28.032: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:28.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:29.439: INFO: rc: 1
Apr 22 23:19:29.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:29.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:30.297: INFO: rc: 1
Apr 22 23:19:30.297: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:30.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:31.417: INFO: rc: 1
Apr 22 23:19:31.417: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:31.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:32.073: INFO: rc: 1
Apr 22 23:19:32.074: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:32.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:33.116: INFO: rc: 1
Apr 22 23:19:33.116: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:33.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:33.958: INFO: rc: 1
Apr 22 23:19:33.958: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:34.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:35.167: INFO: rc: 1
Apr 22 23:19:35.167: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ + echonc -v hostName -t
 -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:35.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:36.619: INFO: rc: 1
Apr 22 23:19:36.620: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:36.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:36.983: INFO: rc: 1
Apr 22 23:19:36.983: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:37.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:38.249: INFO: rc: 1
Apr 22 23:19:38.249: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:38.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:39.338: INFO: rc: 1
Apr 22 23:19:39.338: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:39.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:41.146: INFO: rc: 1
Apr 22 23:19:41.146: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:41.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:42.749: INFO: rc: 1
Apr 22 23:19:42.749: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
+ echo hostName
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:43.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:44.164: INFO: rc: 1
Apr 22 23:19:44.164: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:44.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:45.207: INFO: rc: 1
Apr 22 23:19:45.207: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:45.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:46.311: INFO: rc: 1
Apr 22 23:19:46.311: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:46.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:47.205: INFO: rc: 1
Apr 22 23:19:47.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:47.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:48.335: INFO: rc: 1
Apr 22 23:19:48.335: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:48.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:49.025: INFO: rc: 1
Apr 22 23:19:49.025: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:49.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:50.112: INFO: rc: 1
Apr 22 23:19:50.112: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:50.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:51.030: INFO: rc: 1
Apr 22 23:19:51.030: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:51.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:52.001: INFO: rc: 1
Apr 22 23:19:52.001: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:52.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:52.983: INFO: rc: 1
Apr 22 23:19:52.983: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:53.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:54.255: INFO: rc: 1
Apr 22 23:19:54.255: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:54.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:54.974: INFO: rc: 1
Apr 22 23:19:54.974: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:55.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:55.960: INFO: rc: 1
Apr 22 23:19:55.960: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:56.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:56.976: INFO: rc: 1
Apr 22 23:19:56.977: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:57.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:58.208: INFO: rc: 1
Apr 22 23:19:58.208: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:58.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:19:59.097: INFO: rc: 1
Apr 22 23:19:59.097: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:19:59.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:20:00.460: INFO: rc: 1
Apr 22 23:20:00.461: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:20:00.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:20:00.969: INFO: rc: 1
Apr 22 23:20:00.969: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:20:01.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:20:01.981: INFO: rc: 1
Apr 22 23:20:01.981: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... repeated identical probe attempts (roughly one per second, 23:20:02.714 through 23:20:26.712) elided; each ran the same kubectl exec / nc command and failed with rc: 1, "nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused" ...]
Apr 22 23:20:26.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177'
Apr 22 23:20:27.184: INFO: rc: 1
Apr 22 23:20:27.184: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7729 exec execpodgxp9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32177:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32177
nc: connect to 10.10.190.207 port 32177 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 23:20:27.185: FAIL: Unexpected error:
    <*errors.errorString | 0xc004c1e260>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32177 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32177 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001460600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001460600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001460600, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
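The failure above is the e2e framework giving up after polling the NodePort for its full 2m0s window, one probe per second, every probe refused. A minimal sketch of that poll-until-deadline pattern (helper name and parameters are illustrative, not the framework's actual code):

```python
import socket
import time

def reach_tcp(host, port, deadline_s=120.0, interval_s=1.0, per_try_timeout=2.0):
    """Poll a TCP endpoint until it accepts a connection or the deadline passes.

    Mirrors the log above: each attempt has its own short timeout (like
    `nc -w 2`), and failed attempts are retried until the overall deadline
    (like the 2m0s service-reachability window) expires.
    """
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:
        try:
            with socket.create_connection((host, port), timeout=per_try_timeout):
                return True  # endpoint reachable
        except OSError:
            time.sleep(interval_s)  # equivalent of the "Retrying..." lines
    return False  # deadline exhausted -> "service is not reachable within ..."
```

Against a port where nothing listens, every attempt fails fast with connection refused, so the loop burns the whole deadline before returning False, which matches the ~1s retry cadence visible in the timestamps above.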
Apr 22 23:20:27.186: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-7729".
STEP: Found 17 events.
Apr 22 23:20:27.212: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodgxp9r: { } Scheduled: Successfully assigned services-7729/execpodgxp9r to node1
Apr 22 23:20:27.212: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-mcmjp: { } Scheduled: Successfully assigned services-7729/nodeport-update-service-mcmjp to node2
Apr 22 23:20:27.212: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-n6svj: { } Scheduled: Successfully assigned services-7729/nodeport-update-service-n6svj to node2
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:08 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-n6svj
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:08 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-mcmjp
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:13 +0000 UTC - event for nodeport-update-service-mcmjp: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:13 +0000 UTC - event for nodeport-update-service-n6svj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:14 +0000 UTC - event for nodeport-update-service-n6svj: {kubelet node2} Created: Created container nodeport-update-service
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:14 +0000 UTC - event for nodeport-update-service-n6svj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 731.68638ms
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:15 +0000 UTC - event for nodeport-update-service-mcmjp: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.630699632s
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:15 +0000 UTC - event for nodeport-update-service-n6svj: {kubelet node2} Started: Started container nodeport-update-service
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:16 +0000 UTC - event for nodeport-update-service-mcmjp: {kubelet node2} Started: Started container nodeport-update-service
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:16 +0000 UTC - event for nodeport-update-service-mcmjp: {kubelet node2} Created: Created container nodeport-update-service
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:22 +0000 UTC - event for execpodgxp9r: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:22 +0000 UTC - event for execpodgxp9r: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 297.639909ms
Apr 22 23:20:27.212: INFO: At 2022-04-22 23:18:23 +0000 UTC - event for execpodgxp9r: {kubelet node1} Started: Started container agnhost-container
Apr 22 23:20:27.213: INFO: At 2022-04-22 23:18:23 +0000 UTC - event for execpodgxp9r: {kubelet node1} Created: Created container agnhost-container
Apr 22 23:20:27.215: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Apr 22 23:20:27.215: INFO: execpodgxp9r                   node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:20 +0000 UTC  }]
Apr 22 23:20:27.215: INFO: nodeport-update-service-mcmjp  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:08 +0000 UTC  }]
Apr 22 23:20:27.215: INFO: nodeport-update-service-n6svj  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:08 +0000 UTC  }]
Apr 22 23:20:27.215: INFO: 
Apr 22 23:20:27.220: INFO: 
Logging node info for node master1
Apr 22 23:20:27.223: INFO: Node Info: &Node{ObjectMeta:{master1    70710064-7222-41b1-b51e-81deaa6e7014 75228 0 2022-04-22 19:56:45 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:24 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:24 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:24 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:24 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:20:27.223: INFO: 
Logging kubelet events for node master1
Apr 22 23:20:27.226: INFO: 
Logging pods the kubelet thinks are on node master1
Apr 22 23:20:27.248: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:20:27.248: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:20:27.248: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 22 23:20:27.248: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Container autoscaler ready: true, restart count 2
Apr 22 23:20:27.248: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Container docker-registry ready: true, restart count 0
Apr 22 23:20:27.248: INFO: 	Container nginx ready: true, restart count 0
Apr 22 23:20:27.248: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:27.248: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:27.248: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Container kube-controller-manager ready: true, restart count 2
Apr 22 23:20:27.248: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:27.248: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:27.248: INFO: 	Container prometheus-operator ready: true, restart count 0
Apr 22 23:20:27.248: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Container kube-scheduler ready: true, restart count 0
Apr 22 23:20:27.248: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.248: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 22 23:20:27.345: INFO: 
Latency metrics for node master1
Apr 22 23:20:27.345: INFO: 
Logging node info for node master2
Apr 22 23:20:27.348: INFO: Node Info: &Node{ObjectMeta:{master2    4a346a45-ed0b-49d9-a2ad-b419d2c4705c 75110 0 2022-04-22 19:57:16 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 20:08:32 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:19 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:20:27.348: INFO: 
Logging kubelet events for node master2
Apr 22 23:20:27.351: INFO: 
Logging pods the kubelet thinks are on node master2
Apr 22 23:20:27.361: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.361: INFO: 	Container kube-scheduler ready: true, restart count 1
Apr 22 23:20:27.361: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:27.361: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:20:27.361: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 22 23:20:27.361: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.361: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:27.361: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.361: INFO: 	Container coredns ready: true, restart count 1
Apr 22 23:20:27.361: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.361: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 22 23:20:27.361: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.361: INFO: 	Container kube-controller-manager ready: true, restart count 2
Apr 22 23:20:27.361: INFO: kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.361: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:20:27.361: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.361: INFO: 	Container nfd-controller ready: true, restart count 0
Apr 22 23:20:27.361: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:27.361: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:27.361: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:27.439: INFO: 
Latency metrics for node master2
Apr 22 23:20:27.439: INFO: 
Logging node info for node master3
Apr 22 23:20:27.442: INFO: Node Info: &Node{ObjectMeta:{master3    43c25e47-7b5c-4cf0-863e-39d16b72dcb3 75113 0 2022-04-22 19:57:26 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:19 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:19 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:20:27.442: INFO: 
Logging kubelet events for node master3
Apr 22 23:20:27.445: INFO: 
Logging pods the kubelet thinks are on node master3
Apr 22 23:20:27.453: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:27.453: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:20:27.453: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 22 23:20:27.453: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.453: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:27.453: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.453: INFO: 	Container coredns ready: true, restart count 1
Apr 22 23:20:27.453: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:27.453: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:27.453: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:27.453: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.453: INFO: 	Container kube-proxy ready: true, restart count 1
Apr 22 23:20:27.453: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.453: INFO: 	Container kube-controller-manager ready: true, restart count 3
Apr 22 23:20:27.453: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.453: INFO: 	Container kube-scheduler ready: true, restart count 2
Apr 22 23:20:27.453: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.453: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 22 23:20:27.546: INFO: 
Latency metrics for node master3
Apr 22 23:20:27.546: INFO: 
Logging node info for node node1
Apr 22 23:20:27.549: INFO: Node Info: &Node{ObjectMeta:{node1    e0ec3d42-4e2e-47e3-b369-98011b25b39b 75230 0 2022-04-22 19:58:33 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 
19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-04-22 22:25:16 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2022-04-22 22:25:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 
+0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:24 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:24 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:24 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:24 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:20:27.550: INFO: 
Logging kubelet events for node node1
Apr 22 23:20:27.552: INFO: 
Logging pods the kubelet thinks is on node node1
Apr 22 23:20:27.571: INFO: up-down-3-lt5qv started at 2022-04-22 23:19:25 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container up-down-3 ready: false, restart count 0
Apr 22 23:20:27.571: INFO: iperf2-server-deployment-59979d877-grx42 started at 2022-04-22 23:19:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container iperf2-server ready: false, restart count 0
Apr 22 23:20:27.571: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:27.571: INFO: execpodgxp9r started at 2022-04-22 23:18:20 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:20:27.571: INFO: netserver-0 started at 2022-04-22 23:20:10 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:20:27.571: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container nodereport ready: true, restart count 0
Apr 22 23:20:27.571: INFO: 	Container reconcile ready: true, restart count 0
Apr 22 23:20:27.571: INFO: service-headless-toggled-hvksn started at 2022-04-22 23:19:53 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container service-headless-toggled ready: true, restart count 0
Apr 22 23:20:27.571: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 22 23:20:27.571: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container config-reloader ready: true, restart count 0
Apr 22 23:20:27.571: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:20:27.571: INFO: 	Container grafana ready: true, restart count 0
Apr 22 23:20:27.571: INFO: 	Container prometheus ready: true, restart count 1
Apr 22 23:20:27.571: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container collectd ready: true, restart count 0
Apr 22 23:20:27.571: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 22 23:20:27.571: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 22 23:20:27.571: INFO: e2e-net-exec started at 2022-04-22 23:19:36 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container e2e-net-exec ready: true, restart count 0
Apr 22 23:20:27.571: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container nginx-proxy ready: true, restart count 2
Apr 22 23:20:27.571: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:20:27.571: INFO: up-down-2-lt6q4 started at 2022-04-22 23:18:17 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container up-down-2 ready: false, restart count 0
Apr 22 23:20:27.571: INFO: up-down-3-nxxpb started at 2022-04-22 23:19:25 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container up-down-3 ready: false, restart count 0
Apr 22 23:20:27.571: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:20:27.571: INFO: test-container-pod started at 2022-04-22 23:20:18 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:27.571: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Init container install-cni ready: true, restart count 2
Apr 22 23:20:27.571: INFO: 	Container kube-flannel ready: true, restart count 3
Apr 22 23:20:27.571: INFO: host-test-container-pod started at 2022-04-22 23:19:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:20:27.571: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 20:16:40 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container tas-extender ready: true, restart count 0
Apr 22 23:20:27.571: INFO: netserver-0 started at 2022-04-22 23:19:56 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:27.571: INFO: up-down-3-pg45m started at 2022-04-22 23:19:25 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container up-down-3 ready: false, restart count 0
Apr 22 23:20:27.571: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container discover ready: false, restart count 0
Apr 22 23:20:27.571: INFO: 	Container init ready: false, restart count 0
Apr 22 23:20:27.571: INFO: 	Container install ready: false, restart count 0
Apr 22 23:20:27.571: INFO: netserver-0 started at 2022-04-22 23:19:58 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:27.571: INFO: verify-service-down-host-exec-pod started at  (0+0 container statuses recorded)
Apr 22 23:20:27.571: INFO: iperf2-clients-mnhr6 started at 2022-04-22 23:19:34 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container iperf2-client ready: false, restart count 0
Apr 22 23:20:27.571: INFO: netserver-0 started at 2022-04-22 23:18:48 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:27.571: INFO: startup-script started at 2022-04-22 23:19:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container startup-script ready: true, restart count 0
Apr 22 23:20:27.571: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 22 23:20:27.571: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:27.571: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:27.571: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:27.910: INFO: 
Latency metrics for node node1
Apr 22 23:20:27.911: INFO: 
Logging node info for node node2
Apr 22 23:20:27.915: INFO: Node Info: &Node{ObjectMeta:{node2    ef89f5d1-0c69-4be8-a041-8437402ef215 75235 0 2022-04-22 19:58:33 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 
19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 22:25:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-04-22 22:42:49 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi 
BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:26 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:26 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:26 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:26 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
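The node-status dump above serializes the node's image cache as repeated `ContainerImage{Names:[...],SizeBytes:<n>,}` records on one line. A minimal sketch for pulling name/size pairs out of such a dump, assuming only the textual layout shown in the log (this is not a Kubernetes API call, and the parser is hypothetical):

```python
import re

# Matches one serialized entry: ContainerImage{Names:[<names>],SizeBytes:<n>,}
# The field layout is an assumption taken from the log text above.
ENTRY_RE = re.compile(r"ContainerImage\{Names:\[([^\]]*)\],SizeBytes:(\d+),\}")

def parse_images(dump: str):
    """Return (tag, size_bytes) pairs from a node-status dump, largest first."""
    images = []
    for names, size in ENTRY_RE.findall(dump):
        # The last whitespace-separated name is usually the human-readable tag.
        tag = names.split()[-1]
        images.append((tag, int(size)))
    return sorted(images, key=lambda pair: -pair[1])

sample = ("ContainerImage{Names:[busybox@sha256:141c busybox:1.28],"
          "SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c38 "
          "k8s.gcr.io/pause:3.4.1],SizeBytes:682696,}")
print(parse_images(sample))
```

Running this over the full dump gives a quick largest-image-first inventory, which is mostly useful when chasing image-pull latency on a node.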
Apr 22 23:20:27.916: INFO: 
Logging kubelet events for node node2
Apr 22 23:20:27.918: INFO: 
Logging pods the kubelet thinks are on node node2
Apr 22 23:20:27.934: INFO: service-headless-toggled-cgxvz started at 2022-04-22 23:19:53 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container service-headless-toggled ready: true, restart count 0
Apr 22 23:20:27.934: INFO: service-headless-cd878 started at 2022-04-22 23:19:41 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container service-headless ready: true, restart count 0
Apr 22 23:20:27.934: INFO: netserver-1 started at 2022-04-22 23:19:56 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:27.934: INFO: test-container-pod started at 2022-04-22 23:20:19 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:27.934: INFO: service-headless-brw2z started at 2022-04-22 23:19:41 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container service-headless ready: true, restart count 0
Apr 22 23:20:27.934: INFO: kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:20:27.934: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 22 23:20:27.934: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container discover ready: false, restart count 0
Apr 22 23:20:27.934: INFO: 	Container init ready: false, restart count 0
Apr 22 23:20:27.934: INFO: 	Container install ready: false, restart count 0
Apr 22 23:20:27.934: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:27.934: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:27.934: INFO: service-headless-toggled-5n5nh started at 2022-04-22 23:19:53 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container service-headless-toggled ready: true, restart count 0
Apr 22 23:20:27.934: INFO: netserver-1 started at 2022-04-22 23:20:10 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container webserver ready: false, restart count 0
Apr 22 23:20:27.934: INFO: test-container-pod started at 2022-04-22 23:19:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:27.934: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:20:27.934: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:20:27.934: INFO: nodeport-update-service-n6svj started at 2022-04-22 23:18:08 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container nodeport-update-service ready: true, restart count 0
Apr 22 23:20:27.934: INFO: netserver-1 started at 2022-04-22 23:18:47 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:27.934: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.934: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 22 23:20:27.935: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container nodereport ready: true, restart count 0
Apr 22 23:20:27.935: INFO: 	Container reconcile ready: true, restart count 0
Apr 22 23:20:27.935: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container cmk-webhook ready: true, restart count 0
Apr 22 23:20:27.935: INFO: nodeport-update-service-mcmjp started at 2022-04-22 23:18:08 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container nodeport-update-service ready: true, restart count 0
Apr 22 23:20:27.935: INFO: host-test-container-pod started at 2022-04-22 23:20:18 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:20:27.935: INFO: service-headless-ppdkh started at 2022-04-22 23:19:41 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container service-headless ready: true, restart count 0
Apr 22 23:20:27.935: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container nginx-proxy ready: true, restart count 1
Apr 22 23:20:27.935: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 22 23:20:27.935: INFO: netserver-1 started at 2022-04-22 23:19:57 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:27.935: INFO: boom-server started at 2022-04-22 23:18:59 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container boom-server ready: true, restart count 0
Apr 22 23:20:27.935: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:27.935: INFO: collectd-ptpbz started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:20:27.935: INFO: 	Container collectd ready: true, restart count 0
Apr 22 23:20:27.935: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 22 23:20:27.935: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 22 23:20:29.898: INFO: 
Latency metrics for node node2
Apr 22 23:20:29.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7729" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [141.081 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Apr 22 23:20:27.185: Unexpected error:
      <*errors.errorString | 0xc004c1e260>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32177 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32177 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":0,"skipped":158,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
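The failure above is the framework's reachability gate timing out: it polls the NodePort endpoint (`10.10.190.207:32177` over TCP) until it connects or 2m0s elapse. A rough stand-in for that poll loop, as a sketch only (the function name and the 2-second retry interval are illustrative choices, not the e2e framework's actual code):

```python
import socket
import time

def wait_reachable(host: str, port: int, timeout: float, interval: float = 2.0) -> bool:
    """Poll host:port with TCP connection attempts until one succeeds or
    `timeout` seconds elapse; returns False on timeout, mirroring the
    "service is not reachable within 2m0s timeout" failure above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            # Connection refused / unreachable: back off, but never past the deadline.
            time.sleep(min(interval, max(0.0, deadline - time.monotonic())))
    return False
```

Against the cluster in this log the equivalent call would be `wait_reachable("10.10.190.207", 32177, 120)`; the test failed because no attempt in that window succeeded.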
Apr 22 23:20:29.915: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:20:10.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-3393
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:20:10.535: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:20:10.568: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:12.571: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:14.574: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:16.571: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:18.572: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:20.572: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:22.572: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:24.572: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:26.572: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:28.573: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:30.574: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:20:32.573: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:20:32.578: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:20:36.599: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:20:36.599: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:20:36.606: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:36.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3393" for this suite.


S [SKIPPING] [26.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Apr 22 23:20:36.620: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:18:47.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-3967
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 23:18:47.799: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 23:18:47.827: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:49.831: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:51.830: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:18:53.831: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:55.833: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:57.832: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:18:59.830: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:01.831: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:03.833: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:05.832: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:07.831: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 23:19:09.832: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 23:19:09.836: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 22 23:19:11.840: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 22 23:19:17.873: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Apr 22 23:19:17.873: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Apr 22 23:19:17.896: INFO: Service node-port-service in namespace nettest-3967 found.
Apr 22 23:19:17.911: INFO: Service session-affinity-service in namespace nettest-3967 found.
STEP: Waiting for NodePort service to expose endpoint
Apr 22 23:19:18.913: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Apr 22 23:19:19.916: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) 10.10.190.207 (node) --> 10.233.26.50:90 (config.clusterIP)
Apr 22 23:19:19.919: INFO: Going to poll 10.233.26.50 on port 90 at least 0 times, with a maximum of 34 tries before failing
Apr 22 23:19:19.922: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.233.26.50 90 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:19.922: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:21.008: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1])
Apr 22 23:19:23.013: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.233.26.50 90 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:23.014: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:24.111: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
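Each probe in this test shells out to `echo hostName | nc -w 1 -u <addr> <port>`: send one datagram, wait up to a second for a reply, and let `grep -v '^\s*$'` fail (exit code 1) when nothing comes back. The same exchange as a self-contained Python sketch, run here against a local loopback socket since the cluster addresses in the log are only reachable inside that cluster (the helper name is made up for illustration):

```python
import socket

def udp_probe(host: str, port: int, payload: bytes = b"hostName", wait: float = 1.0):
    """Send one UDP datagram and wait up to `wait` seconds for a reply,
    like `echo hostName | nc -w 1 -u host port`. Returns the reply bytes,
    or None when nothing arrives -- UDP itself gives no delivery signal."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(wait)
        s.sendto(payload, (host, port))
        try:
            reply, _ = s.recvfrom(4096)
            return reply
        except OSError:  # timeout, or ICMP port-unreachable surfaced as an error
            return None
```

The repeated `command terminated with exit code 1, stdout: ""` lines that follow are the None case: the datagram to `10.10.190.207:31454` went unanswered, so `nc` printed nothing and the `grep` stage exited nonzero.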
STEP: dialing(udp) 10.10.190.207 (node) --> 10.10.190.207:31454 (nodeIP)
Apr 22 23:19:24.111: INFO: Going to poll 10.10.190.207 on port 31454 at least 0 times, with a maximum of 34 tries before failing
Apr 22 23:19:24.114: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:24.114: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:24.199: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:24.199: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:26.205: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:26.205: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:26.462: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:26.462: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:28.465: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:28.465: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:29.321: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:29.321: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:31.325: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:31.325: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:31.428: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:31.428: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:33.432: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:33.432: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:33.570: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:33.570: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:35.575: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:35.575: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:36.600: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:36.600: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:38.604: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:38.604: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:39.274: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:39.274: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:41.277: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:41.277: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:42.552: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:42.553: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:44.557: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:44.557: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:44.804: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:44.804: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:46.808: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:46.808: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:47.184: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:47.184: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:49.188: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:49.188: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:49.398: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:49.398: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:51.403: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:51.403: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:51.508: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:51.508: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:53.512: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:53.512: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:53.737: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:53.737: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:55.743: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:55.743: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:55.826: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:55.826: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:19:57.829: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:19:57.829: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:19:58.201: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:19:58.201: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:00.205: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:00.205: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:00.460: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:00.460: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:02.464: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:02.464: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:02.622: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:02.622: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:04.625: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:04.625: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:04.841: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:04.842: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:06.845: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:06.845: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:06.965: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:06.965: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:08.969: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:08.969: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:09.057: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:09.057: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:11.061: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:11.061: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:11.270: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:11.270: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:13.273: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:13.273: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:13.415: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:13.415: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:15.419: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:15.419: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:15.526: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:15.526: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:17.529: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:17.529: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:17.689: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:17.689: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:19.692: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:19.692: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:19.783: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:19.783: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:21.790: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:21.790: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:21.871: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:21.871: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:23.876: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:23.876: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:23.967: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:23.967: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:25.970: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:25.971: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:26.055: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:26.055: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:28.059: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:28.059: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:28.156: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:28.156: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:30.160: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:30.160: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:30.398: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:30.399: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:32.404: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:32.404: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:32.486: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:32.486: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:34.493: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:34.493: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:34.578: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:34.578: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:36.583: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:36.583: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:36.664: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:36.665: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Apr 22 23:20:38.669: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'] Namespace:nettest-3967 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 23:20:38.669: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:20:38.747: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Apr 22 23:20:38.747: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
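Each iteration above is the same UDP probe: the framework execs `echo hostName | nc -w 1 -u 10.10.190.207 31454 | grep -v '^\s*$'` inside the host-test pod, and the trailing `grep -v '^\s*$'` turns an empty (timed-out) reply into exit code 1, which is what every retry reports. A minimal stand-alone sketch of that exit-code behaviour — `probe` is a hypothetical stand-in that feeds a canned reply through the same blank-line filter, with no live nc/UDP involved:

```shell
#!/bin/sh
# probe REPLY -- pass a canned reply through the filter the e2e command uses;
# stands in for: echo hostName | nc -w 1 -u <nodeIP> <nodePort>
# ('\s' in the pattern is a GNU grep extension, kept verbatim from the log)
probe() {
  printf '%s' "$1" | grep -v '^\s*$'
}

probe ''             # timed-out UDP read: empty reply, grep matches nothing
echo "empty reply -> exit $?"
probe 'netserver-0'  # a healthy netexec pod echoes its hostname back
echo "real reply  -> exit $?"
```

Run under GNU grep, the empty reply exits 1 and the non-empty one exits 0, so the framework can treat the pipeline's exit status as "did any endpoint answer".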
Apr 22 23:20:40.749: INFO: 
Output of kubectl describe pod nettest-3967/netserver-0:

Apr 22 23:20:40.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-3967 describe pod netserver-0 --namespace=nettest-3967'
Apr 22 23:20:40.941: INFO: stderr: ""
Apr 22 23:20:40.941: INFO: stdout: "Name:         netserver-0\nNamespace:    nettest-3967\nPriority:     0\nNode:         node1/10.10.190.207\nStart Time:   Fri, 22 Apr 2022 23:18:48 +0000\nLabels:       selector-f1af9854-cc6e-430d-ae07-6c16bf68ac9a=true\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.101\"\n                    ],\n                    \"mac\": \"f6:32:bc:22:2f:d4\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.3.101\"\n                    ],\n                    \"mac\": \"f6:32:bc:22:2f:d4\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: collectd\nStatus:       Running\nIP:           10.244.3.101\nIPs:\n  IP:  10.244.3.101\nContainers:\n  webserver:\n    Container ID:  docker://58f24411730e2b26c30b63a9d8177f2acd8ef86b9b4e31e32ac94e8b99b3ad5a\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Fri, 22 Apr 2022 23:18:51 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5zcp (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-h5zcp:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       \n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node1\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  113s  default-scheduler  Successfully assigned nettest-3967/netserver-0 to node1\n  Normal  Pulling    110s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     110s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 563.744429ms\n  Normal  Created    109s  kubelet            Created container webserver\n  Normal  Started    109s  kubelet            Started container webserver\n"
Apr 22 23:20:40.942: INFO: Name:         netserver-0
Namespace:    nettest-3967
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Fri, 22 Apr 2022 23:18:48 +0000
Labels:       selector-f1af9854-cc6e-430d-ae07-6c16bf68ac9a=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.101"
                    ],
                    "mac": "f6:32:bc:22:2f:d4",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.3.101"
                    ],
                    "mac": "f6:32:bc:22:2f:d4",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.3.101
IPs:
  IP:  10.244.3.101
Containers:
  webserver:
    Container ID:  docker://58f24411730e2b26c30b63a9d8177f2acd8ef86b9b4e31e32ac94e8b99b3ad5a
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 22 Apr 2022 23:18:51 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5zcp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-h5zcp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node1
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  113s  default-scheduler  Successfully assigned nettest-3967/netserver-0 to node1
  Normal  Pulling    110s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     110s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 563.744429ms
  Normal  Created    109s  kubelet            Created container webserver
  Normal  Started    109s  kubelet            Started container webserver

Apr 22 23:20:40.942: INFO: 
Output of kubectl describe pod nettest-3967/netserver-1:

Apr 22 23:20:40.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-3967 describe pod netserver-1 --namespace=nettest-3967'
Apr 22 23:20:41.124: INFO: stderr: ""
Apr 22 23:20:41.124: INFO: stdout: "Name:         netserver-1\nNamespace:    nettest-3967\nPriority:     0\nNode:         node2/10.10.190.208\nStart Time:   Fri, 22 Apr 2022 23:18:47 +0000\nLabels:       selector-f1af9854-cc6e-430d-ae07-6c16bf68ac9a=true\nAnnotations:  k8s.v1.cni.cncf.io/network-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.4.93\"\n                    ],\n                    \"mac\": \"2e:95:8e:24:f8:c1\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.4.93\"\n                    ],\n                    \"mac\": \"2e:95:8e:24:f8:c1\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\n              kubernetes.io/psp: collectd\nStatus:       Running\nIP:           10.244.4.93\nIPs:\n  IP:  10.244.4.93\nContainers:\n  webserver:\n    Container ID:  docker://ba4e743d0219a5289724ec4d32e2b99eefc6fe6beac8fe58226b170ef9bc9afa\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    0/TCP, 0/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n    State:          Running\n      Started:      Fri, 22 Apr 2022 23:18:54 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vz6wj (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-vz6wj:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       \n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node2\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  113s  default-scheduler  Successfully assigned nettest-3967/netserver-1 to node2\n  Normal  Pulling    109s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     109s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 298.819229ms\n  Normal  Created    108s  kubelet            Created container webserver\n  Normal  Started    107s  kubelet            Started container webserver\n"
Apr 22 23:20:41.124: INFO: Name:         netserver-1
Namespace:    nettest-3967
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Fri, 22 Apr 2022 23:18:47 +0000
Labels:       selector-f1af9854-cc6e-430d-ae07-6c16bf68ac9a=true
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.93"
                    ],
                    "mac": "2e:95:8e:24:f8:c1",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "default-cni-network",
                    "interface": "eth0",
                    "ips": [
                        "10.244.4.93"
                    ],
                    "mac": "2e:95:8e:24:f8:c1",
                    "default": true,
                    "dns": {}
                }]
              kubernetes.io/psp: collectd
Status:       Running
IP:           10.244.4.93
IPs:
  IP:  10.244.4.93
Containers:
  webserver:
    Container ID:  docker://ba4e743d0219a5289724ec4d32e2b99eefc6fe6beac8fe58226b170ef9bc9afa
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    0/TCP, 0/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
    State:          Running
      Started:      Fri, 22 Apr 2022 23:18:54 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vz6wj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-vz6wj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node2
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  113s  default-scheduler  Successfully assigned nettest-3967/netserver-1 to node2
  Normal  Pulling    109s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     109s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 298.819229ms
  Normal  Created    108s  kubelet            Created container webserver
  Normal  Started    107s  kubelet            Started container webserver

Apr 22 23:20:41.125: FAIL: failed dialing endpoint, failed to find expected endpoints, 
tries 34
Command echo hostName | nc -w 1 -u 10.10.190.207 31454
retrieved map[]
expected map[netserver-0:{} netserver-1:{}]

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0036b5b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0036b5b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0036b5b00, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
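The FAIL summary above reduces to a set difference: after 34 probes the retrieved endpoint map is empty (`map[]`) while two names were expected. A minimal sketch of that bookkeeping in plain shell, with the values hard-coded from the log (the real framework keeps these sets in Go):

```shell
#!/bin/sh
# Expected endpoint names vs. the hostnames actually retrieved by the probes.
expected="netserver-0 netserver-1"
retrieved=""   # the log's map[]: no endpoint ever answered in 34 tries

missing=""
for ep in $expected; do
  case " $retrieved " in
    *" $ep "*) ;;                  # this endpoint answered at least once
    *) missing="$missing $ep" ;;   # never seen in any reply
  esac
done
echo "missing endpoints:$missing"  # -> missing endpoints: netserver-0 netserver-1
```

Since both pods are Running and Ready (readiness only checks HTTP on 8080), a fully empty retrieved set like this points at the UDP NodePort datapath rather than at the pods themselves.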
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-3967".
STEP: Found 20 events.
Apr 22 23:20:41.130: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for host-test-container-pod: { } Scheduled: Successfully assigned nettest-3967/host-test-container-pod to node1
Apr 22 23:20:41.130: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-0: { } Scheduled: Successfully assigned nettest-3967/netserver-0 to node1
Apr 22 23:20:41.130: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for netserver-1: { } Scheduled: Successfully assigned nettest-3967/netserver-1 to node2
Apr 22 23:20:41.130: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for test-container-pod: { } Scheduled: Successfully assigned nettest-3967/test-container-pod to node2
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:18:50 +0000 UTC - event for netserver-0: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:18:50 +0000 UTC - event for netserver-0: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 563.744429ms
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:18:51 +0000 UTC - event for netserver-0: {kubelet node1} Started: Started container webserver
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:18:51 +0000 UTC - event for netserver-0: {kubelet node1} Created: Created container webserver
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:18:52 +0000 UTC - event for netserver-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:18:52 +0000 UTC - event for netserver-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 298.819229ms
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:18:53 +0000 UTC - event for netserver-1: {kubelet node2} Created: Created container webserver
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:18:54 +0000 UTC - event for netserver-1: {kubelet node2} Started: Started container webserver
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:19:12 +0000 UTC - event for host-test-container-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:19:12 +0000 UTC - event for host-test-container-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 286.891331ms
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:19:13 +0000 UTC - event for host-test-container-pod: {kubelet node1} Created: Created container agnhost-container
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:19:13 +0000 UTC - event for host-test-container-pod: {kubelet node1} Started: Started container agnhost-container
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:19:15 +0000 UTC - event for test-container-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:19:15 +0000 UTC - event for test-container-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 308.976779ms
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:19:15 +0000 UTC - event for test-container-pod: {kubelet node2} Created: Created container webserver
Apr 22 23:20:41.130: INFO: At 2022-04-22 23:19:16 +0000 UTC - event for test-container-pod: {kubelet node2} Started: Started container webserver
Apr 22 23:20:41.134: INFO: POD                      NODE   PHASE    GRACE  CONDITIONS
Apr 22 23:20:41.134: INFO: host-test-container-pod  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:11 +0000 UTC  }]
Apr 22 23:20:41.134: INFO: netserver-0              node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:47 +0000 UTC  }]
Apr 22 23:20:41.134: INFO: netserver-1              node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:18:47 +0000 UTC  }]
Apr 22 23:20:41.134: INFO: test-container-pod       node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 23:19:11 +0000 UTC  }]
Apr 22 23:20:41.134: INFO: 
Apr 22 23:20:41.138: INFO: 
Logging node info for node master1
Apr 22 23:20:41.141: INFO: Node Info: &Node{ObjectMeta:{master1    70710064-7222-41b1-b51e-81deaa6e7014 75369 0 2022-04-22 19:56:45 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-22 19:56:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-04-22 20:04:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:34 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:34 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:34 +0000 UTC,LastTransitionTime:2022-04-22 19:56:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:34 +0000 UTC,LastTransitionTime:2022-04-22 19:59:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:025a90e4dec046189b065fcf68380be7,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:7e907077-ed98-4d46-8305-29673eaf3bf3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:20:41.141: INFO: 
Logging kubelet events for node master1
Apr 22 23:20:41.144: INFO: 
Logging pods the kubelet thinks are on node master1
Apr 22 23:20:41.153: INFO: kube-apiserver-master1 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.153: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 22 23:20:41.153: INFO: kube-controller-manager-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.153: INFO: 	Container kube-controller-manager ready: true, restart count 2
Apr 22 23:20:41.153: INFO: kube-multus-ds-amd64-px448 started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.153: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:41.153: INFO: prometheus-operator-585ccfb458-zsrdh started at 2022-04-22 20:13:26 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:41.153: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:41.153: INFO: 	Container prometheus-operator ready: true, restart count 0
Apr 22 23:20:41.153: INFO: kube-scheduler-master1 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.153: INFO: 	Container kube-scheduler ready: true, restart count 0
Apr 22 23:20:41.153: INFO: node-exporter-b7qpl started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:41.153: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:41.153: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:41.153: INFO: kube-proxy-hfgsd started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.153: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:20:41.153: INFO: kube-flannel-6vhmq started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:41.153: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:20:41.153: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 22 23:20:41.153: INFO: dns-autoscaler-7df78bfcfb-smkxp started at 2022-04-22 20:00:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.153: INFO: 	Container autoscaler ready: true, restart count 2
Apr 22 23:20:41.154: INFO: container-registry-65d7c44b96-7r6xc started at 2022-04-22 20:04:24 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:41.154: INFO: 	Container docker-registry ready: true, restart count 0
Apr 22 23:20:41.154: INFO: 	Container nginx ready: true, restart count 0
Apr 22 23:20:41.249: INFO: 
Latency metrics for node master1
Apr 22 23:20:41.249: INFO: 
Logging node info for node master2
Apr 22 23:20:41.251: INFO: Node Info: &Node{ObjectMeta:{master2    4a346a45-ed0b-49d9-a2ad-b419d2c4705c 75450 0 2022-04-22 19:57:16 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-22 19:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-04-22 19:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2022-04-22 20:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-04-22 20:08:32 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:39 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:39 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:39 +0000 UTC,LastTransitionTime:2022-04-22 19:57:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:39 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a68fd05f71b4f40ab5ab92028e707cc,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:45292226-7389-4aa9-8a98-33e443731d14,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 
kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e 
k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:20:41.251: INFO: 
Logging kubelet events for node master2
Apr 22 23:20:41.254: INFO: 
Logging pods the kubelet thinks are on node master2
Apr 22 23:20:41.261: INFO: kube-apiserver-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.261: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 22 23:20:41.261: INFO: kube-controller-manager-master2 started at 2022-04-22 19:57:55 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.261: INFO: 	Container kube-controller-manager ready: true, restart count 2
Apr 22 23:20:41.261: INFO: kube-proxy-df6vx started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.261: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:20:41.261: INFO: node-feature-discovery-controller-cff799f9f-jfpb6 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.261: INFO: 	Container nfd-controller ready: true, restart count 0
Apr 22 23:20:41.261: INFO: node-exporter-4tbfp started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:41.261: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:41.261: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:41.261: INFO: kube-scheduler-master2 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.261: INFO: 	Container kube-scheduler ready: true, restart count 1
Apr 22 23:20:41.261: INFO: kube-flannel-jlvdn started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:41.261: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:20:41.261: INFO: 	Container kube-flannel ready: true, restart count 1
Apr 22 23:20:41.261: INFO: kube-multus-ds-amd64-7hw9v started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.261: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:41.261: INFO: coredns-8474476ff8-fhb42 started at 2022-04-22 20:00:09 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.261: INFO: 	Container coredns ready: true, restart count 1
Apr 22 23:20:41.340: INFO: 
Latency metrics for node master2
Apr 22 23:20:41.340: INFO: 
Logging node info for node master3
Apr 22 23:20:41.343: INFO: Node Info: &Node{ObjectMeta:{master3    43c25e47-7b5c-4cf0-863e-39d16b72dcb3 75452 0 2022-04-22 19:57:26 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2022-04-22 19:57:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-04-22 19:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2022-04-22 20:11:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:32 +0000 UTC,LastTransitionTime:2022-04-22 20:02:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:40 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:40 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:40 +0000 UTC,LastTransitionTime:2022-04-22 19:57:26 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:40 +0000 UTC,LastTransitionTime:2022-04-22 19:59:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e38c1766e8048fab7e120a1bdaf206c,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7266f836-7ba1-4d9b-9691-d8344ab173f1,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:20:41.343: INFO: 
Logging kubelet events for node master3
Apr 22 23:20:41.346: INFO: 
Logging pods the kubelet thinks are on node master3
Apr 22 23:20:41.355: INFO: kube-apiserver-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.355: INFO: 	Container kube-apiserver ready: true, restart count 0
Apr 22 23:20:41.355: INFO: kube-controller-manager-master3 started at 2022-04-22 19:57:27 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.355: INFO: 	Container kube-controller-manager ready: true, restart count 3
Apr 22 23:20:41.355: INFO: kube-scheduler-master3 started at 2022-04-22 20:06:28 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.355: INFO: 	Container kube-scheduler ready: true, restart count 2
Apr 22 23:20:41.355: INFO: kube-proxy-z9q2t started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.355: INFO: 	Container kube-proxy ready: true, restart count 1
Apr 22 23:20:41.355: INFO: kube-flannel-6jkw9 started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:41.355: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:20:41.355: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 22 23:20:41.355: INFO: kube-multus-ds-amd64-tlrjm started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.355: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:41.355: INFO: coredns-8474476ff8-fdcj7 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.355: INFO: 	Container coredns ready: true, restart count 1
Apr 22 23:20:41.355: INFO: node-exporter-tnqsz started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:41.355: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:41.355: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:41.441: INFO: 
Latency metrics for node master3
Apr 22 23:20:41.441: INFO: 
Logging node info for node node1
Apr 22 23:20:41.444: INFO: Node Info: &Node{ObjectMeta:{node1    e0ec3d42-4e2e-47e3-b369-98011b25b39b 75372 0 2022-04-22 19:58:33 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 
19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:11:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2022-04-22 22:25:16 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2022-04-22 22:25:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:29 
+0000 UTC,LastTransitionTime:2022-04-22 20:02:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:34 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:34 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:34 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:34 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cb8bd90647b418e9defe4fbcf1e6b5b,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:bd49e3f7-3bce-4d4e-8596-432fc9a7c1c3,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 cmk:v1.5.1 
localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:60182103,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:47f8ebd32249a09f532409c6412ae16c6ad4ad6e8075e218c81c65cc0fe46deb localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:20:41.445: INFO: 
Logging kubelet events for node node1
Apr 22 23:20:41.449: INFO: 
Logging pods the kubelet thinks is on node node1
Apr 22 23:20:41.469: INFO: node-exporter-9zzfv started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:41.469: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:41.469: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:41.470: INFO: test-container-pod started at 2022-04-22 23:20:32 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:41.470: INFO: netserver-0 started at 2022-04-22 23:18:48 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:41.470: INFO: startup-script started at 2022-04-22 23:19:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container startup-script ready: true, restart count 0
Apr 22 23:20:41.470: INFO: node-feature-discovery-worker-2hkr5 started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 22 23:20:41.470: INFO: kube-multus-ds-amd64-x8jqs started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:41.470: INFO: service-headless-toggled-hvksn started at 2022-04-22 23:19:53 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container service-headless-toggled ready: true, restart count 0
Apr 22 23:20:41.470: INFO: netserver-0 started at 2022-04-22 23:20:10 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:41.470: INFO: cmk-2vd7z started at 2022-04-22 20:12:29 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container nodereport ready: true, restart count 0
Apr 22 23:20:41.470: INFO: 	Container reconcile ready: true, restart count 0
Apr 22 23:20:41.470: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:20:41.470: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 22 23:20:41.470: INFO: prometheus-k8s-0 started at 2022-04-22 20:13:52 +0000 UTC (0+4 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container config-reloader ready: true, restart count 0
Apr 22 23:20:41.470: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:20:41.470: INFO: 	Container grafana ready: true, restart count 0
Apr 22 23:20:41.470: INFO: 	Container prometheus ready: true, restart count 1
Apr 22 23:20:41.470: INFO: collectd-g2c8k started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:20:41.470: INFO: 	Container collectd ready: true, restart count 0
Apr 22 23:20:41.470: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 22 23:20:41.470: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 22 23:20:41.470: INFO: nginx-proxy-node1 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.471: INFO: 	Container nginx-proxy ready: true, restart count 2
Apr 22 23:20:41.472: INFO: kube-proxy-v8fdh started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.472: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:20:41.472: INFO: host-test-container-pod started at 2022-04-22 23:19:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.472: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:20:41.472: INFO: kube-flannel-l4rjs started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:41.472: INFO: 	Init container install-cni ready: true, restart count 2
Apr 22 23:20:41.472: INFO: 	Container kube-flannel ready: true, restart count 3
Apr 22 23:20:41.472: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g started at 2022-04-22 20:16:40 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.472: INFO: 	Container tas-extender ready: true, restart count 0
Apr 22 23:20:41.472: INFO: cmk-init-discover-node1-7s78z started at 2022-04-22 20:11:46 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:20:41.472: INFO: 	Container discover ready: false, restart count 0
Apr 22 23:20:41.472: INFO: 	Container init ready: false, restart count 0
Apr 22 23:20:41.472: INFO: 	Container install ready: false, restart count 0
Apr 22 23:20:41.704: INFO: 
Latency metrics for node node1
Apr 22 23:20:41.704: INFO: 
Logging node info for node node2
Apr 22 23:20:41.707: INFO: Node Info: &Node{ObjectMeta:{node2    ef89f5d1-0c69-4be8-a041-8437402ef215 75415 0 2022-04-22 19:58:33 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2022-04-22 19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-04-22 
19:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-04-22 19:59:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-04-22 20:08:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.ku
bernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-04-22 20:12:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-04-22 22:25:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2022-04-22 22:42:49 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi 
BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-04-22 20:02:30 +0000 UTC,LastTransitionTime:2022-04-22 20:02:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:36 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:36 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-04-22 23:20:36 +0000 UTC,LastTransitionTime:2022-04-22 19:58:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-04-22 23:20:36 +0000 UTC,LastTransitionTime:2022-04-22 19:59:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5e6f6d1644f942b881dbf2d9722ff85b,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:cc218e06-beff-411d-b91e-f4a272d9c83f,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.14,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:3abd88f9582d6c6aa3a8d632acfc2025ecdd675591624e74704115e666022eb7 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 
k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:fc94db7f14c5544fb3407ca9c8af2658c9ff8983716baaf93d5654ac2393b7ec localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Apr 22 23:20:41.708: INFO: 
Logging kubelet events for node node2
Apr 22 23:20:41.711: INFO: 
Logging pods the kubelet thinks are on node node2
Apr 22 23:20:41.828: INFO: kube-proxy-jvkvz started at 2022-04-22 19:58:37 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.828: INFO: 	Container kube-proxy ready: true, restart count 2
Apr 22 23:20:41.828: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 started at 2022-04-22 20:00:14 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.828: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:20:41.828: INFO: verify-service-up-exec-pod-fq5qz started at 2022-04-22 23:20:40 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.828: INFO: 	Container agnhost-container ready: false, restart count 0
Apr 22 23:20:41.828: INFO: netserver-1 started at 2022-04-22 23:18:47 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.828: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:41.828: INFO: test-container-pod started at 2022-04-22 23:19:11 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.828: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:41.828: INFO: node-feature-discovery-worker-bktph started at 2022-04-22 20:08:13 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.828: INFO: 	Container nfd-worker ready: true, restart count 0
Apr 22 23:20:41.828: INFO: cmk-vdkxb started at 2022-04-22 20:12:30 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:41.828: INFO: 	Container nodereport ready: true, restart count 0
Apr 22 23:20:41.828: INFO: 	Container reconcile ready: true, restart count 0
Apr 22 23:20:41.828: INFO: cmk-webhook-6c9d5f8578-nmxns started at 2022-04-22 20:12:30 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.828: INFO: 	Container cmk-webhook ready: true, restart count 0
Apr 22 23:20:41.828: INFO: service-headless-ppdkh started at 2022-04-22 23:19:41 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.828: INFO: 	Container service-headless ready: true, restart count 0
Apr 22 23:20:41.828: INFO: nginx-proxy-node2 started at 2022-04-22 19:58:33 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container nginx-proxy ready: true, restart count 1
Apr 22 23:20:41.829: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd started at 2022-04-22 20:09:26 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container kube-sriovdp ready: true, restart count 0
Apr 22 23:20:41.829: INFO: kube-multus-ds-amd64-kjrqq started at 2022-04-22 19:59:42 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container kube-multus ready: true, restart count 1
Apr 22 23:20:41.829: INFO: collectd-ptpbz started at 2022-04-22 20:17:31 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container collectd ready: true, restart count 0
Apr 22 23:20:41.829: INFO: 	Container collectd-exporter ready: true, restart count 0
Apr 22 23:20:41.829: INFO: 	Container rbac-proxy ready: true, restart count 0
Apr 22 23:20:41.829: INFO: verify-service-up-host-exec-pod started at 2022-04-22 23:20:38 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container agnhost-container ready: true, restart count 0
Apr 22 23:20:41.829: INFO: service-headless-toggled-cgxvz started at 2022-04-22 23:19:53 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container service-headless-toggled ready: true, restart count 0
Apr 22 23:20:41.829: INFO: service-headless-cd878 started at 2022-04-22 23:19:41 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container service-headless ready: true, restart count 0
Apr 22 23:20:41.829: INFO: kube-flannel-2kskh started at 2022-04-22 19:59:33 +0000 UTC (1+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Init container install-cni ready: true, restart count 0
Apr 22 23:20:41.829: INFO: 	Container kube-flannel ready: true, restart count 2
Apr 22 23:20:41.829: INFO: cmk-init-discover-node2-2m4dr started at 2022-04-22 20:12:06 +0000 UTC (0+3 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container discover ready: false, restart count 0
Apr 22 23:20:41.829: INFO: 	Container init ready: false, restart count 0
Apr 22 23:20:41.829: INFO: 	Container install ready: false, restart count 0
Apr 22 23:20:41.829: INFO: node-exporter-c4bhs started at 2022-04-22 20:13:34 +0000 UTC (0+2 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:20:41.829: INFO: 	Container node-exporter ready: true, restart count 0
Apr 22 23:20:41.829: INFO: service-headless-toggled-5n5nh started at 2022-04-22 23:19:53 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container service-headless-toggled ready: true, restart count 0
Apr 22 23:20:41.829: INFO: netserver-1 started at 2022-04-22 23:20:10 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container webserver ready: true, restart count 0
Apr 22 23:20:41.829: INFO: service-headless-brw2z started at 2022-04-22 23:19:41 +0000 UTC (0+1 container statuses recorded)
Apr 22 23:20:41.829: INFO: 	Container service-headless ready: true, restart count 0
Apr 22 23:20:42.728: INFO: 
Latency metrics for node node2
Apr 22 23:20:42.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3967" for this suite.


• Failure [115.053 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Apr 22 23:20:41.125: failed dialing endpoint, failed to find expected endpoints, tries 34
    Command echo hostName | nc -w 1 -u 10.10.190.207 31454
    retrieved map[]
    expected map[netserver-0:{} netserver-1:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
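For reference, the per-try check behind the failure above sends "hostName" over UDP to the NodePort and aggregates the pod hostnames that answer into the "retrieved" map. A minimal standalone sketch of that aggregation (the real probe is the `nc` command quoted in the failure; replies are canned here so the logic runs without a cluster, and the hostnames are taken from the expected map in the log):

```shell
#!/bin/sh
# Sketch of the e2e "dial endpoint" check. The real per-try probe is:
#   echo hostName | nc -w 1 -u 10.10.190.207 31454
# The framework repeats it (34 tries here) and collects distinct replies;
# the failing run retrieved map[], i.e. no reply ever came back.
expected="netserver-0 netserver-1"

# Canned replies standing in for successful UDP responses across tries.
replies="netserver-0
netserver-1
netserver-0"

# De-duplicate the replies, mirroring the retrieved map's key set.
retrieved=$(printf '%s\n' "$replies" | sort -u | tr '\n' ' ' | sed 's/ $//')

if [ "$retrieved" = "$expected" ]; then
    echo "found expected endpoints: $retrieved"
else
    echo "failed to find expected endpoints: got [$retrieved], want [$expected]"
fi
```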
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:19:41.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-5903
STEP: creating service service-headless in namespace services-5903
STEP: creating replication controller service-headless in namespace services-5903
I0422 23:19:41.704889      39 runners.go:190] Created replication controller with name: service-headless, namespace: services-5903, replica count: 3
I0422 23:19:44.756691      39 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:47.757632      39 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:50.758497      39 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:53.759845      39 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-5903
STEP: creating service service-headless-toggled in namespace services-5903
STEP: creating replication controller service-headless-toggled in namespace services-5903
I0422 23:19:53.774226      39 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-5903, replica count: 3
I0422 23:19:56.825466      39 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:19:59.825791      39 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0422 23:20:02.826849      39 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Apr 22 23:20:02.829: INFO: Creating new host exec pod
Apr 22 23:20:02.861: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:04.864: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:06.865: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 22 23:20:06.865: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 22 23:20:14.888: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.254:80 2>&1 || true; echo; done" in pod services-5903/verify-service-up-host-exec-pod
Apr 22 23:20:14.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.254:80 2>&1 || true; echo; done'
Apr 22 23:20:15.282: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n"
Apr 22 23:20:15.283: INFO: stdout: "service-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-head
less-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5
nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\n"
Apr 22 23:20:15.283: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.254:80 2>&1 || true; echo; done" in pod services-5903/verify-service-up-exec-pod-nw674
Apr 22 23:20:15.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-up-exec-pod-nw674 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.254:80 2>&1 || true; echo; done'
Apr 22 23:20:15.825: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n[... identical "+ wget -q -T 1 -O - http://10.233.36.254:80" / "+ echo" trace repeated for the remaining iterations ...]\n"
Apr 22 23:20:15.825: INFO: stdout: "service-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\n[... 150 responses in total, all served by the same three pods: -5n5nh, -hvksn, -cgxvz ...]\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-5903
STEP: Deleting pod verify-service-up-exec-pod-nw674 in namespace services-5903
STEP: verifying service-headless is not up
Apr 22 23:20:15.838: INFO: Creating new host exec pod
Apr 22 23:20:15.852: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:17.856: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 22 23:20:17.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.18.216:80 && echo service-down-failed'
Apr 22 23:20:20.141: INFO: rc: 28
Apr 22 23:20:20.141: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.18.216:80 && echo service-down-failed" in pod services-5903/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.18.216:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.18.216:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5903
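The probe above treats curl exit code 28 (connect timeout) as the expected outcome for a service with no reachable endpoints, while any HTTP response would print "service-down-failed". A minimal sketch of that pass/fail decision; the helper name `interpret_probe` is hypothetical, not part of the e2e framework:

```shell
#!/bin/sh
# Sketch of the decision the e2e check applies to curl's exit code.
# curl rc 0  -> got an HTTP response: the service is unexpectedly still up
# curl rc 28 -> connect timeout: no endpoint answered within 2s (expected)
interpret_probe() {
  rc="$1"
  if [ "$rc" -eq 0 ]; then
    echo "service-down-failed"
  elif [ "$rc" -eq 28 ]; then
    echo "service-down-confirmed"
  else
    echo "probe-error rc=$rc"
  fi
}

interpret_probe 28   # -> service-down-confirmed
```

In the log, `rc: 28` therefore means the check passed: the ClusterIP stopped answering once the backends were removed from the service.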
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Apr 22 23:20:20.157: INFO: Creating new host exec pod
Apr 22 23:20:20.167: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:22.171: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:24.170: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:26.171: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:28.171: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:30.172: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:32.171: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:34.171: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:36.172: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 22 23:20:36.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.36.254:80 && echo service-down-failed'
Apr 22 23:20:38.429: INFO: rc: 28
Apr 22 23:20:38.429: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.36.254:80 && echo service-down-failed" in pod services-5903/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.36.254:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.36.254:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5903
STEP: removing service.kubernetes.io/headless label
STEP: verifying service is up
Apr 22 23:20:38.443: INFO: Creating new host exec pod
Apr 22 23:20:38.455: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:40.462: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Apr 22 23:20:40.462: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Apr 22 23:20:44.484: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.254:80 2>&1 || true; echo; done" in pod services-5903/verify-service-up-host-exec-pod
Apr 22 23:20:44.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.254:80 2>&1 || true; echo; done'
Apr 22 23:20:44.865: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n[... identical "+ wget" / "+ echo" trace repeated for the remaining iterations ...]\n"
Apr 22 23:20:44.866: INFO: stdout: "service-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\n[... 150 responses in total, all served by the same three pods: -cgxvz, -5n5nh, -hvksn ...]\n"
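The wget loop fires 150 requests at the ClusterIP and the test then decides "3 reachable backends" by counting how many distinct pod hostnames appear in the replies. A standalone sketch of that counting step, with the sample responses abbreviated from the log:

```shell
#!/bin/sh
# Count distinct backends seen across the load-balanced responses.
# Each wget against the ClusterIP prints the serving pod's hostname,
# so `sort -u | wc -l` over the lines yields the number of reachable backends.
responses="service-headless-toggled-5n5nh
service-headless-toggled-hvksn
service-headless-toggled-cgxvz
service-headless-toggled-5n5nh
service-headless-toggled-hvksn"
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l)
echo "$distinct"
```

With all three pod names present in the replies, the count is 3 and the "service is up" verification passes.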
Apr 22 23:20:44.866: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.254:80 2>&1 || true; echo; done" in pod services-5903/verify-service-up-exec-pod-fq5qz
Apr 22 23:20:44.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-up-exec-pod-fq5qz -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.36.254:80 2>&1 || true; echo; done'
Apr 22 23:20:45.205: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.36.254:80\n+ echo\n[... identical "+ wget" / "+ echo" trace repeated for the remaining iterations ...]\n"
Apr 22 23:20:45.205: INFO: stdout: "service-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\n[... responses from the same three pods; list continues ...]\nservice-headless-toggled-cgx
vz\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-5n5nh\nservice-headless-toggled-cgxvz\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-hvksn\nservice-headless-toggled-5n5nh\nservice-headless-toggled-hvksn\nservice-headless-toggled-cgxvz\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-5903
STEP: Deleting pod verify-service-up-exec-pod-fq5qz in namespace services-5903
STEP: verifying service-headless is still not up
Apr 22 23:20:45.221: INFO: Creating new host exec pod
Apr 22 23:20:45.235: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 22 23:20:47.239: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Apr 22 23:20:47.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.18.216:80 && echo service-down-failed'
Apr 22 23:20:49.485: INFO: rc: 28
Apr 22 23:20:49.485: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.18.216:80 && echo service-down-failed" in pod services-5903/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-5903 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.18.216:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.18.216:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-5903
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:20:49.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5903" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:67.826 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":4,"skipped":632,"failed":0}
Apr 22 23:20:49.507: INFO: Running AfterSuite actions on all nodes


{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for node-Service: udp","total":-1,"completed":3,"skipped":684,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for node-Service: udp"]}
Apr 22 23:20:42.745: INFO: Running AfterSuite actions on all nodes
Apr 22 23:20:49.572: INFO: Running AfterSuite actions on node 1
Apr 22 23:20:49.572: INFO: Skipping dumping logs from cluster



Summarizing 3 Failures:

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

[Fail] [sig-network] Networking Granular Checks: Services [It] should function for node-Service: udp 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 28 of 5773 Specs in 161.602 seconds
FAIL! -- 25 Passed | 3 Failed | 0 Pending | 5745 Skipped


Ginkgo ran 1 suite in 2m43.390505617s
Test Suite Failed