Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636775192 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Nov 13 03:46:34.678: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.679: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 13 03:46:34.700: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 13 03:46:34.767: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting
Nov 13 03:46:34.767: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting
Nov 13 03:46:34.767: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 13 03:46:34.767: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 13 03:46:34.767: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 13 03:46:34.777: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 13 03:46:34.777: INFO: e2e test version: v1.21.5
Nov 13 03:46:34.777: INFO: kube-apiserver version: v1.21.1
Nov 13 03:46:34.778: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.783: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:46:34.798: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.819: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:46:34.797: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.820: INFO: Cluster IP family: ipv4
Nov 13 03:46:34.799: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.820: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:46:34.811: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.832: INFO: Cluster IP family: ipv4
Nov 13 03:46:34.808: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.832: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:46:34.817: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.839: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:46:34.819: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.841: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:46:34.821: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.843: INFO: Cluster IP family: ipv4
------------------------------
Nov 13 03:46:34.824: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:46:34.844: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1113 03:46:35.164348 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:46:35.164: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:46:35.166: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-7469
STEP: changing service nodeport-range-test to out-of-range NodePort 15470
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 15470
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:35.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7469" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":85,"failed":0}
------------------------------
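The spec above exercises API-server validation of spec.ports[].nodePort: a value outside the configured service node port range (30000-32767 by default) must be rejected both when an existing NodePort Service is updated and when a new Service is created with that port. A minimal sketch of the same check done by hand, assuming a cluster with the default range (the service name and the port 15470 are illustrative):

# Hypothetical manifest; 15470 falls outside the default 30000-32767 range.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nodeport-range-demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80
    nodePort: 15470
EOF
# Expected: the API server rejects the object with an error along the lines of
# "provided port is not in the valid range. The range of valid ports is 30000-32767".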
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
W1113 03:46:35.399017 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:46:35.399: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:46:35.401: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov 13 03:46:35.403: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:35.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-6950" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866
S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should only target nodes with endpoints [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959
Only supported for providers [gce gke] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
[BeforeEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
W1113 03:46:35.401390 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:46:35.401: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:46:35.403: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Nov 13 03:46:35.405: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:35.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-9892" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
control plane should not expose well-known ports [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214
Only supported for providers [gce] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Nov 13 03:46:35.532: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Nov 13 03:46:35.637: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:35.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-2539" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.152 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
[Slow] Nginx
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
should conform to Ingress spec [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722
Only supported for providers [gce gke] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:36.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
W1113 03:46:36.090854 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:46:36.091: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:46:36.092: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Nov 13 03:46:36.095: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:36.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-350" for this suite.
S [SKIPPING] [0.037 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for the cluster [Provider:GCE] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Only supported for providers [gce gke] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:34.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1113 03:46:34.917249 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:46:34.917: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:46:34.921: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-1506
STEP: deleting original service nodeport-reuse
Nov 13 03:46:34.940: INFO: Creating new host exec pod
Nov 13 03:46:34.955: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:36.958: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:38.958: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:40.958: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:42.960: INFO: The status of Pod hostexec is Running (Ready = true)
Nov 13 03:46:42.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1506 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :30498' | tail -n +2 | grep LISTEN'
Nov 13 03:46:43.675: INFO: stderr: "+ ss -ant46 'sport = :30498'\n+ tail -n +2\n+ grep LISTEN\n"
Nov 13 03:46:43.675: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 30498
STEP: deleting service nodeport-reuse in namespace services-1506
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:43.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1506" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:8.818 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should release NodePorts on delete
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":1,"skipped":13,"failed":0}
------------------------------
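The spec above checks that a Service's allocated NodePort (30498 here) stops being held once the Service is deleted, and that a new Service may then request the same port explicitly. A rough by-hand version of the same sequence, assuming an illustrative service name and reusing the ss check from the log:

# Sketch only; the service name is illustrative.
kubectl create service nodeport nodeport-reuse --tcp=80:80
PORT=$(kubectl get svc nodeport-reuse -o jsonpath='{.spec.ports[0].nodePort}')
kubectl delete service nodeport-reuse
# On a node (the e2e test runs this from a hostNetwork "hostexec" pod), the freed
# port should no longer be in LISTEN state:
ss -ant46 "sport = :${PORT}" | tail -n +2 | grep LISTEN || echo "port ${PORT} released"
# Creating a new Service that explicitly sets nodePort=${PORT} should now succeed.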
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1113 03:46:35.726209 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:46:35.726: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:46:35.728: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-8081
STEP: creating replication controller externalip-test in namespace services-8081
I1113 03:46:35.740505 37 runners.go:190] Created replication controller with name: externalip-test, namespace: services-8081, replica count: 2
I1113 03:46:38.793318 37 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 03:46:41.793685 37 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 03:46:44.794716 37 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 03:46:47.795317 37 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 13 03:46:47.795: INFO: Creating new exec pod
Nov 13 03:46:54.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8081 exec execpodxx69f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Nov 13 03:46:55.072: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalip-test 80\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Nov 13 03:46:55.072: INFO: stdout: "externalip-test-47fbz"
Nov 13 03:46:55.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8081 exec execpodxx69f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.47.255 80'
Nov 13 03:46:55.323: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.47.255 80\nConnection to 10.233.47.255 80 port [tcp/http] succeeded!\n"
Nov 13 03:46:55.323: INFO: stdout: "externalip-test-6slb5"
Nov 13 03:46:55.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8081 exec execpodxx69f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Nov 13 03:46:55.572: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Nov 13 03:46:55.572: INFO: stdout: "externalip-test-6slb5"
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:55.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8081" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:19.876 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":1,"skipped":352,"failed":0}
------------------------------
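The spec above verifies that kube-proxy answers for an address listed in spec.externalIPs even when that address is not configured on any node: traffic from inside the cluster to 203.0.113.250:80 is still load-balanced to the Service's endpoints. A minimal sketch of an equivalent Service, assuming an illustrative name and selector (203.0.113.250 is a TEST-NET-3 address that is deliberately not assigned to any node):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: externalip-demo
spec:
  selector:
    app: externalip-demo
  ports:
  - port: 80
    targetPort: 80
  externalIPs:
  - 203.0.113.250
EOF
# From any pod in the cluster, traffic to the external IP is handled by kube-proxy
# and forwarded to the backing pods, e.g.:
#   kubectl exec <some-pod> -- sh -c 'echo hostName | nc -v -t -w 2 203.0.113.250 80'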
[BeforeEach] [sig-network] Netpol API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:55.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Nov 13 03:46:55.635: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Nov 13 03:46:55.638: INFO: starting watch
STEP: patching
STEP: updating
Nov 13 03:46:55.648: INFO: waiting for watch events with expected annotations
Nov 13 03:46:55.649: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Nov 13 03:46:55.649: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:55.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-9707" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":2,"skipped":355,"failed":0}
------------------------------
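The spec above walks the networking.k8s.io/v1 NetworkPolicy endpoints: create, get, list, watch, patch, update, delete, and delete-collection. A rough kubectl equivalent of the same CRUD sequence, assuming an illustrative policy name and namespace:

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-demo
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
EOF
kubectl get networkpolicy deny-all-demo -o yaml      # get
kubectl get networkpolicies --all-namespaces         # cluster-wide listing
kubectl patch networkpolicy deny-all-demo --type=merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
kubectl delete networkpolicy deny-all-demo           # delete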
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:55.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:56.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3822" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":3,"skipped":373,"failed":0}
------------------------------
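The paths probed by the spec above are fixed API-server endpoints, so they can also be checked directly with kubectl; some of them (for example /metrics and /logs) may need extra RBAC permissions. A small sketch:

for path in /healthz /api /apis /metrics /openapi/v2 /version /logs; do
  kubectl get --raw "$path" > /dev/null && echo "OK $path"
done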
[BeforeEach] [sig-network] NetworkPolicy API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:56.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Nov 13 03:46:56.278: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Nov 13 03:46:56.281: INFO: starting watch
STEP: patching
STEP: updating
Nov 13 03:46:56.288: INFO: waiting for watch events with expected annotations
Nov 13 03:46:56.288: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Nov 13 03:46:56.288: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:56.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-8473" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":478,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:36.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-a6e01c05-c8e1-4c28-84f6-2cd5784ad9c4]
STEP: Verifying pods for RC slow-terminating-unready-pod
Nov 13 03:46:36.334: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Nov 13 03:46:46.353: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-9cltx]: "NOW: 2021-11-13 03:46:46.349823478 +0000 UTC m=+3.362906020", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-4423.svc.cluster.local
Nov 13 03:46:46.353: INFO: Creating new exec pod
Nov 13 03:46:52.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4423 exec execpod-lpm9r -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4423.svc.cluster.local:80/'
Nov 13 03:46:52.628: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4423.svc.cluster.local:80/\n"
Nov 13 03:46:52.629: INFO: stdout: "NOW: 2021-11-13 03:46:52.620190662 +0000 UTC m=+9.633273204"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-4423 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Nov 13 03:46:57.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4423 exec execpod-lpm9r -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4423.svc.cluster.local:80/; test "$?" -ne "0"'
Nov 13 03:46:59.313: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4423.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Nov 13 03:46:59.313: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Nov 13 03:46:59.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4423 exec execpod-lpm9r -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-4423.svc.cluster.local:80/'
Nov 13 03:46:59.795: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-4423.svc.cluster.local:80/\n"
Nov 13 03:46:59.795: INFO: stdout: "NOW: 2021-11-13 03:46:59.786076381 +0000 UTC m=+16.799158924"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-4423
STEP: deleting service tolerate-unready in namespace services-4423
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:46:59.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4423" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:23.534 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should create endpoints for unready pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":1,"skipped":545,"failed":0}
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1113 03:46:35.058502 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:46:35.059: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:46:35.061: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-3658
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:46:35.175: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:46:35.207: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:37.210: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:39.215: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:41.212: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:43.212: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:45.211: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:47.214: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:49.212: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:51.212: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:53.211: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:55.211: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:46:55.216: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:46:57.221: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:47:05.242: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:47:05.242: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:05.248: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:47:05.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3658" for this suite.
S [SKIPPING] [30.224 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for client IP based session affinity: udp [LinuxOnly] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1113 03:46:35.287609 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:46:35.287: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:46:35.289: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-106
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:46:35.404: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:46:35.436: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:37.439: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:39.444: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:41.442: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:43.438: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:45.440: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:47.441: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:49.440: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:51.442: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:53.438: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:55.439: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:46:55.443: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:46:57.450: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:47:05.490: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:47:05.491: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:05.498: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:47:05.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-106" for this suite.
S [SKIPPING] [30.243 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should check kube-proxy urls [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-6267
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:46:35.596: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:46:35.628: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:37.632: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:39.632: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:41.633: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:43.632: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:45.633: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:47.633: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:49.632: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:51.634: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:53.632: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:55.631: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:46:55.636: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:46:57.639: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:47:05.659: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:47:05.659: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:05.666: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:47:05.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6267" for this suite.
S [SKIPPING] [30.209 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for pod-Service: udp [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-6802
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:46:35.684: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:46:35.713: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:37.717: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:39.718: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:41.718: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:43.718: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:45.717: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:47.718: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:49.717: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:51.717: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:53.717: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:55.717: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:57.718: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:46:57.723: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:47:05.746: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:47:05.746: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:05.753: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:47:05.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6802" for this suite.
S [SKIPPING] [30.222 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should update endpoints: udp [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:35.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1113 03:46:35.078502 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 03:46:35.078: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 03:46:35.080: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-7477
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:46:35.218: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:46:35.254: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:37.258: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:39.259: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:41.258: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:43.257: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:45.258: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:47.261: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:49.258: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:51.258: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:53.258: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:55.256: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:57.258: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:46:57.263: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:47:07.286: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:47:07.286: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:07.294: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:47:07.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7477" for this suite.
S [SKIPPING] [32.248 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should update endpoints: http [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:47:07.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Nov 13 03:47:07.540: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-4057" to be "Succeeded or Failed"
Nov 13 03:47:07.542: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261634ms
Nov 13 03:47:09.546: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005941888s
Nov 13 03:47:11.550: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010045609s
Nov 13 03:47:13.555: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014528205s
Nov 13 03:47:15.559: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01854764s
Nov 13 03:47:17.562: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022245402s
Nov 13 03:47:19.566: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025374451s
STEP: Saw pod success
Nov 13 03:47:19.566: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:47:19.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4057" for this suite.
• [SLOW TEST:12.154 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide Internet connection for containers [Feature:Networking-IPv4]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":1,"skipped":105,"failed":0}
------------------------------
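The spec above runs a short-lived "connectivity-test" pod that connects out to 8.8.8.8 and passes when the pod reaches the Succeeded phase. A rough standalone equivalent, assuming the agnhost test image and a TCP probe to 8.8.8.8:53 (the image tag, pod name, and exact agnhost arguments are assumptions, not the suite's own invocation):

kubectl run connectivity-check --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- \
  connect --timeout=5s 8.8.8.8:53
# The pod should end up in phase "Succeeded" if outbound IPv4 connectivity works:
kubectl get pod connectivity-check -o jsonpath='{.status.phase}'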
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:44.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-5534
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:46:44.234: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:46:44.265: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:46.268: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:48.268: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:50.268: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:52.269: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:54.268: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:56.268: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:58.269: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:00.268: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:02.270: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:04.268: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:47:04.272: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:47:06.277: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:47:20.298: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:47:20.298: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:20.307: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:47:20.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5534" for this suite.
S [SKIPPING] [36.250 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should be able to handle large requests: udp [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:56.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-3856
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:46:56.913: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:46:56.944: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:58.949: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:00.948: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:02.948: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:04.947: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:06.950: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:08.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:10.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:12.952: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:14.949: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:16.952: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:18.948: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:47:18.953: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:47:20.958: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:47:31.020: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:47:31.020: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:31.026: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:47:31.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3856" for this suite.
S [SKIPPING] [34.238 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for node-Service: udp [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
Requires at least 2 nodes (not -1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:47:05.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-5627
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:47:05.650: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:05.681: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:07.685: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:09.684: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:11.685: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:13.684: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:15.684: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:17.685: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:19.685: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:21.686: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:23.684: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:25.685: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:27.687: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:47:27.691: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:47:29.696: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:47:33.718: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint
count 2 STEP: Getting node addresses Nov 13 03:47:33.718: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:47:33.726: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:47:33.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-5627" for this suite. S [SKIPPING] [28.196 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for endpoint-Service: http [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:33.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename esipp STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858 Nov 13 03:47:33.994: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:47:33.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "esipp-6969" for this suite. 
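------------------------------
The ESIPP spec above never gets past BeforeEach: external source-IP preservation is only exercised on cloud providers, so the run is skipped with "Only supported for providers [gce gke] (not local)". A sketch of that style of provider guard; the --provider flag value and skipUnlessProviderIs are assumptions standing in for the suite's own configuration and skip helper.

package main

import (
	"flag"
	"fmt"
	"os"
)

// provider mirrors the e2e suite's --provider flag ("local" in this run).
var provider = flag.String("provider", "local", "cloud provider under test")

// skipUnlessProviderIs skips (here: exits cleanly) unless the configured
// provider is one of the supported ones, matching the "[gce gke] (not local)"
// message above.
func skipUnlessProviderIs(supported ...string) {
	for _, s := range supported {
		if *provider == s {
			return
		}
	}
	fmt.Printf("S [SKIPPING] Only supported for providers %v (not %s)\n", supported, *provider)
	os.Exit(0)
}

func main() {
	flag.Parse()
	skipUnlessProviderIs("gce", "gke")
	fmt.Println("running ESIPP NodePort checks...")
}
------------------------------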
[AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should work for type=NodePort [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:05.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for node-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198 STEP: Performing setup for networking test in namespace nettest-2966 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 03:47:06.055: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:47:06.091: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:08.095: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:10.094: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:12.096: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:14.095: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:16.097: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:18.094: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:20.094: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:22.096: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:24.094: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:26.096: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:28.094: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:30.096: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:32.097: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:34.095: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:36.100: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:38.094: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 03:47:38.100: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 
03:47:42.141: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Nov 13 03:47:42.141: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:47:42.147: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:47:42.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-2966" for this suite. S [SKIPPING] [36.213 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for node-Service: http [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:19.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should support basic nodePort: udp functionality /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387 STEP: Performing setup for networking test in namespace nettest-6297 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 03:47:19.995: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:47:20.047: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:22.050: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:24.050: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:26.052: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:28.051: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:30.052: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:32.052: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:34.051: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:36.050: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:38.052: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:40.052: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:42.053: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 03:47:42.059: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 03:47:48.094: INFO: Setting MaxTries for pod polling to 34 for networking test based 
on endpoint count 2 STEP: Getting node addresses Nov 13 03:47:48.094: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:47:48.101: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:47:48.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-6297" for this suite. S [SKIPPING] [28.232 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should support basic nodePort: udp functionality [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:46:59.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename network-perf STEP: Waiting for a default service account to be provisioned in namespace [It] should run iperf2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188 Nov 13 03:46:59.945: INFO: deploying iperf2 server Nov 13 03:46:59.948: INFO: Waiting for deployment "iperf2-server-deployment" to complete Nov 13 03:46:59.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)} Nov 13 03:47:01.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372019, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372019, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372019, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372019, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 03:47:03.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372019, 
loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372019, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372019, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372019, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 13 03:47:05.963: INFO: waiting for iperf2 server endpoints Nov 13 03:47:07.967: INFO: found iperf2 server endpoints Nov 13 03:47:07.967: INFO: waiting for client pods to be running Nov 13 03:47:17.974: INFO: all client pods are ready: 2 pods Nov 13 03:47:17.977: INFO: server pod phase Running Nov 13 03:47:17.977: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 03:46:59 +0000 UTC Reason: Message:} Nov 13 03:47:17.977: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 03:47:04 +0000 UTC Reason: Message:} Nov 13 03:47:17.977: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 03:47:04 +0000 UTC Reason: Message:} Nov 13 03:47:17.977: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-11-13 03:46:59 +0000 UTC Reason: Message:} Nov 13 03:47:17.977: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2021-11-13 03:47:03 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://d1d73f9e3aa03dc7de3bd056b1fd6de53ce42bb0cae7b481514f78c7b11b2de4 Started:0xc0036a05dc} Nov 13 03:47:17.977: INFO: found 2 matching client pods Nov 13 03:47:17.980: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-8499 PodName:iperf2-clients-pvdvv ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:47:17.980: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:47:18.090: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads" Nov 13 03:47:18.090: INFO: iperf version: Nov 13 03:47:18.090: INFO: attempting to run command 'iperf -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-pvdvv (node node2) Nov 13 03:47:18.092: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-8499 PodName:iperf2-clients-pvdvv ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:47:18.092: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:47:33.297: INFO: Exec stderr: "" Nov 13 03:47:33.297: INFO: output from exec on client pod iperf2-clients-pvdvv (node node2): 20211113034719.227,10.244.4.107,33180,10.233.13.179,6789,3,0.0-1.0,101842944,814743552 
20211113034720.217,10.244.4.107,33180,10.233.13.179,6789,3,1.0-2.0,117571584,940572672 20211113034721.270,10.244.4.107,33180,10.233.13.179,6789,3,2.0-3.0,109838336,878706688 20211113034722.233,10.244.4.107,33180,10.233.13.179,6789,3,3.0-4.0,114556928,916455424 20211113034723.240,10.244.4.107,33180,10.233.13.179,6789,3,4.0-5.0,89915392,719323136 20211113034724.215,10.244.4.107,33180,10.233.13.179,6789,3,5.0-6.0,101580800,812646400 20211113034725.239,10.244.4.107,33180,10.233.13.179,6789,3,6.0-7.0,106168320,849346560 20211113034726.217,10.244.4.107,33180,10.233.13.179,6789,3,7.0-8.0,110231552,881852416 20211113034727.226,10.244.4.107,33180,10.233.13.179,6789,3,8.0-9.0,116916224,935329792 20211113034728.237,10.244.4.107,33180,10.233.13.179,6789,3,9.0-10.0,116391936,931135488 20211113034728.237,10.244.4.107,33180,10.233.13.179,6789,3,0.0-10.0,1085014016,867351331 Nov 13 03:47:33.299: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-8499 PodName:iperf2-clients-tk68p ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:47:33.299: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:47:33.548: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads" Nov 13 03:47:33.548: INFO: iperf version: Nov 13 03:47:33.548: INFO: attempting to run command 'iperf -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-tk68p (node node1) Nov 13 03:47:33.551: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-8499 PodName:iperf2-clients-tk68p ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 03:47:33.551: INFO: >>> kubeConfig: /root/.kube/config Nov 13 03:47:48.704: INFO: Exec stderr: "" Nov 13 03:47:48.704: INFO: output from exec on client pod iperf2-clients-tk68p (node node1): 20211113034734.685,10.244.3.246,37054,10.233.13.179,6789,3,0.0-1.0,3135373312,25082986496 20211113034735.682,10.244.3.246,37054,10.233.13.179,6789,3,1.0-2.0,3209560064,25676480512 20211113034736.687,10.244.3.246,37054,10.233.13.179,6789,3,2.0-3.0,3157131264,25257050112 20211113034737.698,10.244.3.246,37054,10.233.13.179,6789,3,3.0-4.0,3288334336,26306674688 20211113034738.668,10.244.3.246,37054,10.233.13.179,6789,3,4.0-5.0,3222011904,25776095232 20211113034739.684,10.244.3.246,37054,10.233.13.179,6789,3,5.0-6.0,2964717568,23717740544 20211113034740.669,10.244.3.246,37054,10.233.13.179,6789,3,6.0-7.0,2569928704,20559429632 20211113034741.676,10.244.3.246,37054,10.233.13.179,6789,3,7.0-8.0,2543452160,20347617280 20211113034742.682,10.244.3.246,37054,10.233.13.179,6789,3,8.0-9.0,2051932160,16415457280 20211113034743.668,10.244.3.246,37054,10.233.13.179,6789,3,9.0-10.0,3225288704,25802309632 20211113034743.668,10.244.3.246,37054,10.233.13.179,6789,3,0.0-10.0,29367730176,23494092513 Nov 13 03:47:48.705: INFO: From To Bandwidth (MB/s) Nov 13 03:47:48.705: INFO: node2 node1 103 Nov 13 03:47:48.705: INFO: node1 node1 2801 [AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:47:48.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "network-perf-8499" for this suite. 
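------------------------------
The bandwidth table at the end of the iperf2 spec above is derived from the clients' CSV output: with --reportstyle C each row ends with the bytes transferred and the bits/sec for the interval, and the summary converts the final 0.0-10.0 row to MB/s as bits/sec / 8 / 2^20, which reproduces the 103 and 2801 figures. A small parsing sketch under those assumptions; parseIperfCSVLine is an illustrative name.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseIperfCSVLine extracts the interval and bits/sec from one iperf
// "--reportstyle C" row and converts it to MB/s the same way the summary
// table above does (bits/s -> bytes/s -> MiB/s).
func parseIperfCSVLine(line string) (interval string, mbPerSec float64, err error) {
	f := strings.Split(strings.TrimSpace(line), ",")
	if len(f) < 9 {
		return "", 0, fmt.Errorf("expected 9 CSV fields, got %d", len(f))
	}
	bitsPerSec, err := strconv.ParseFloat(f[8], 64)
	if err != nil {
		return "", 0, err
	}
	return f[6], bitsPerSec / 8 / (1 << 20), nil
}

func main() {
	// Final 0.0-10.0 rows from the two client pods above.
	for _, line := range []string{
		"20211113034728.237,10.244.4.107,33180,10.233.13.179,6789,3,0.0-10.0,1085014016,867351331",
		"20211113034743.668,10.244.3.246,37054,10.233.13.179,6789,3,0.0-10.0,29367730176,23494092513",
	} {
		iv, mb, err := parseIperfCSVLine(line)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %.0f MB/s\n", iv, mb) // prints ~103 and ~2801, matching the table
	}
}
------------------------------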
• [SLOW TEST:48.793 seconds] [sig-network] Networking IPerf2 [Feature:Networking-Performance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should run iperf2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188 ------------------------------ {"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":2,"skipped":585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:20.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should update nodePort: http [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369 STEP: Performing setup for networking test in namespace nettest-6740 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 03:47:20.619: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:47:20.892: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:22.896: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:24.896: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:26.896: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:28.896: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:30.896: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:32.899: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:34.896: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:36.898: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:38.896: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:40.897: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:42.898: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 03:47:42.903: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 03:47:48.938: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Nov 13 03:47:48.938: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:47:48.945: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:47:48.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-6740" for this suite. 
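------------------------------
The long runs of "Running (Ready = false)" above track the PodReady condition, not the pod phase: the netserver pods are Running well before their readiness probe passes. A minimal sketch of that condition check; isPodReady is an illustrative helper, and the example pod is constructed locally rather than fetched from the cluster.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's PodReady condition is True, which is what
// the "Running (Ready = false)" / "(Ready = true)" lines above are tracking.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Printf("The status of Pod netserver-0 is %s (Ready = %v)\n", pod.Status.Phase, isPodReady(pod))
}
------------------------------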
S [SKIPPING] [28.452 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should update nodePort: http [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:48.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be rejected when no endpoints exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968 STEP: creating a service with no endpoints STEP: creating execpod-noendpoints on node node1 Nov 13 03:47:48.341: INFO: Creating new exec pod Nov 13 03:47:52.361: INFO: waiting up to 30s to connect to no-pods:80 STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1 Nov 13 03:47:52.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9436 exec execpod-noendpointsbxkqr -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80' Nov 13 03:47:53.632: INFO: rc: 1 Nov 13 03:47:53.632: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9436 exec execpod-noendpointsbxkqr -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 REFUSED command terminated with exit code 1 error: exit status 1 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:47:53.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9436" for this suite. 
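------------------------------
The "no endpoints" Services spec just above expects an active refusal: agnhost's connect call against no-pods:80 must fail with REFUSED rather than time out, since a ClusterIP service without endpoints is expected to be rejected by the proxy. A plain-Go sketch of the same expectation; the address only resolves through cluster DNS from inside the services-9436 namespace, and expectRefused is an illustrative name.

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

// expectRefused dials addr with a 3s timeout and reports whether the
// connection was actively refused, mirroring agnhost's REFUSED result above.
func expectRefused(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err == nil {
		conn.Close()
		return false // something answered; the service unexpectedly has endpoints
	}
	return errors.Is(err, syscall.ECONNREFUSED)
}

func main() {
	// "no-pods:80" only resolves via cluster DNS from inside the test namespace.
	fmt.Println("refused:", expectRefused("no-pods:80"))
}
------------------------------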
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:5.331 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be rejected when no endpoints exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968 ------------------------------ {"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":2,"skipped":350,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:46:35.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services W1113 03:46:35.048729 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 03:46:35.048: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 03:46:35.050: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should implement service.kubernetes.io/service-proxy-name /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865 STEP: creating service-disabled in namespace services-7600 STEP: creating service service-proxy-disabled in namespace services-7600 STEP: creating replication controller service-proxy-disabled in namespace services-7600 I1113 03:46:35.063710 33 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-7600, replica count: 3 I1113 03:46:38.114981 33 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:46:41.115598 33 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:46:44.115733 33 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:46:47.117756 33 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: creating service in namespace services-7600 STEP: creating service service-proxy-toggled in namespace services-7600 STEP: creating replication controller service-proxy-toggled in namespace services-7600 I1113 03:46:47.130942 33 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-7600, replica count: 3 I1113 03:46:50.183358 33 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:46:53.184337 33 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I1113 03:46:56.186199 33 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: verifying service is up Nov 13 03:46:56.189: INFO: Creating new host exec pod Nov 13 03:46:56.205: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:46:58.209: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:00.210: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:02.211: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Nov 13 03:47:02.211: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Nov 13 03:47:10.228: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.39:80 2>&1 || true; echo; done" in pod services-7600/verify-service-up-host-exec-pod Nov 13 03:47:10.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.39:80 2>&1 || true; echo; done' Nov 13 03:47:11.172: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n" Nov 13 03:47:11.172: INFO: stdout: 
"service-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-to
ggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\n" Nov 13 03:47:11.172: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.39:80 2>&1 || true; echo; done" in pod services-7600/verify-service-up-exec-pod-7bq8z Nov 13 03:47:11.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-up-exec-pod-7bq8z -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.39:80 2>&1 || true; echo; done' Nov 13 03:47:11.703: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O 
- http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n" Nov 13 03:47:11.704: INFO: stdout: 
"service-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-to
ggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7600 STEP: Deleting pod verify-service-up-exec-pod-7bq8z in namespace services-7600 STEP: verifying service-disabled is not up Nov 13 03:47:11.718: INFO: Creating new host exec pod Nov 13 03:47:11.732: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:13.735: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:15.736: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:17.737: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:19.735: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:21.736: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:23.735: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:25.736: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:47:25.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.12.179:80 && echo service-down-failed' Nov 13 03:47:28.188: INFO: rc: 28 Nov 13 03:47:28.188: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.12.179:80 && echo service-down-failed" in pod services-7600/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.12.179:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.12.179:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7600 STEP: adding service-proxy-name label STEP: verifying service is not up Nov 13 03:47:28.203: INFO: Creating new host exec pod Nov 13 03:47:28.218: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:30.222: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:32.222: INFO: The status of Pod 
verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:47:32.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.53.39:80 && echo service-down-failed' Nov 13 03:47:34.487: INFO: rc: 28 Nov 13 03:47:34.488: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.53.39:80 && echo service-down-failed" in pod services-7600/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.53.39:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.53.39:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7600 STEP: removing service-proxy-name annotation STEP: verifying service is up Nov 13 03:47:34.501: INFO: Creating new host exec pod Nov 13 03:47:34.513: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:36.517: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:38.517: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:40.517: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Nov 13 03:47:40.517: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Nov 13 03:47:46.534: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.39:80 2>&1 || true; echo; done" in pod services-7600/verify-service-up-host-exec-pod Nov 13 03:47:46.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.39:80 2>&1 || true; echo; done' Nov 13 03:47:46.905: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n[the same "+ wget -q -T 1 -O - http://10.233.53.39:80" / "+ echo" pair repeats through iteration 150]\n" Nov 13 03:47:46.906: INFO: stdout: 
"service-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-to
ggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\n" Nov 13 03:47:46.906: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.39:80 2>&1 || true; echo; done" in pod services-7600/verify-service-up-exec-pod-dv7gz Nov 13 03:47:46.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-up-exec-pod-dv7gz -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.39:80 2>&1 || true; echo; done' Nov 13 03:47:47.291: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.53.39:80\n+ echo\n+ wget -q -T 1 -O 
- http://10.233.53.39:80\n+ echo\n[the same "+ wget -q -T 1 -O - http://10.233.53.39:80" / "+ echo" pair repeats through iteration 150]\n" Nov 13 03:47:47.292: INFO: stdout: 
"service-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-to
ggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-chnvn\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-hfr28\nservice-proxy-toggled-stg6v\nservice-proxy-toggled-stg6v\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7600 STEP: Deleting pod verify-service-up-exec-pod-dv7gz in namespace services-7600 STEP: verifying service-disabled is still not up Nov 13 03:47:47.305: INFO: Creating new host exec pod Nov 13 03:47:47.319: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:49.323: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:51.331: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:47:51.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.12.179:80 && echo service-down-failed' Nov 13 03:47:53.641: INFO: rc: 28 Nov 13 03:47:53.641: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.12.179:80 && echo service-down-failed" in pod services-7600/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7600 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.12.179:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.12.179:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7600 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:47:53.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7600" for this suite. 
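For reference, the "verifying service has 3 reachable backends" step above reduces to two shell one-liners run inside the exec pods. The sketch below is assembled only from commands this run already executed (namespace services-7600, the verify-service-* pods, and the ClusterIPs 10.233.53.39 / 10.233.12.179); the trailing grep/sort/uniq pipeline is an added convenience for tallying which backends answered, not part of the test itself, and it only works while the exec pods still exist.

# Fire 150 requests at the toggled service's ClusterIP and count responses per backend.
kubectl --kubeconfig=/root/.kube/config -n services-7600 exec verify-service-up-host-exec-pod -- \
  /bin/sh -c 'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.53.39:80 2>&1 || true; echo; done' \
  | grep service-proxy-toggled | sort | uniq -c

# The "service-disabled is not up" half of the check: curl must time out (rc 28),
# so the "service-down-failed" marker after && is never printed.
kubectl --kubeconfig=/root/.kube/config -n services-7600 exec verify-service-down-host-exec-pod -- \
  /bin/sh -c 'curl -g -s --connect-timeout 2 http://10.233.12.179:80 && echo service-down-failed'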
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:78.632 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should implement service.kubernetes.io/service-proxy-name /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":1,"skipped":44,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:48.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should allow pods to hairpin back to themselves through services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986 STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-9176 Nov 13 03:47:48.898: INFO: hairpin-test cluster ip: 10.233.60.183 STEP: creating a client/server pod Nov 13 03:47:48.913: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:50.917: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:52.919: INFO: The status of Pod hairpin is Running (Ready = true) STEP: waiting for the service to expose an endpoint STEP: waiting up to 3m0s for service hairpin-test in namespace services-9176 to expose endpoints map[hairpin:[8080]] Nov 13 03:47:52.928: INFO: successfully validated that service hairpin-test in namespace services-9176 exposes endpoints map[hairpin:[8080]] STEP: Checking if the pod can reach itself Nov 13 03:47:53.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9176 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080' Nov 13 03:47:54.324: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n" Nov 13 03:47:54.324: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Nov 13 03:47:54.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9176 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.60.183 8080' Nov 13 03:47:55.044: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.60.183 8080\nConnection to 10.233.60.183 8080 port [tcp/http-alt] succeeded!\n" Nov 13 03:47:55.044: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:47:55.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9176" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:6.185 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should allow pods to hairpin back to themselves through services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986 ------------------------------ {"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":3,"skipped":661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:31.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should update nodePort: udp [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397 STEP: Performing setup for networking test in namespace nettest-9144 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 03:47:31.638: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:47:31.706: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:33.710: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:35.710: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:37.712: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:39.710: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:41.722: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:43.710: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:45.710: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:47.711: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:49.709: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:51.709: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 03:47:51.714: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 13 03:47:53.716: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 03:48:03.754: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Nov 13 03:48:03.754: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:48:03.761: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:48:03.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-9144" for this suite. S [SKIPPING] [32.258 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should update nodePort: udp [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:53.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for the cluster [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2458.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2458.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2458.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2458.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2458.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2458.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 13 03:48:05.872: INFO: DNS probes using dns-2458/dns-test-5cf10bbb-94fb-4103-b17d-f44cb892ea3d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:48:05.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2458" for this suite. 
• [SLOW TEST:12.098 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for the cluster [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":3,"skipped":416,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:42.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for endpoint-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256 STEP: Performing setup for networking test in namespace nettest-4024 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 13 03:47:42.272: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:47:42.304: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:44.307: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:46.308: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:48.307: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:50.307: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:52.309: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:54.307: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:56.307: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:47:58.308: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:48:00.306: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:48:02.309: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 13 03:48:04.307: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 13 03:48:04.312: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 13 03:48:10.331: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Nov 13 03:48:10.331: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 13 03:48:10.338: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:48:10.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-4024" for this suite. 
S [SKIPPING] [28.184 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for endpoint-Service: udp [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256 Requires at least 2 nodes (not -1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 03:47:05.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should implement service.kubernetes.io/headless /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916 STEP: creating service-headless in namespace services-9812 STEP: creating service service-headless in namespace services-9812 STEP: creating replication controller service-headless in namespace services-9812 I1113 03:47:05.979930 32 runners.go:190] Created replication controller with name: service-headless, namespace: services-9812, replica count: 3 I1113 03:47:09.031290 32 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:47:12.031749 32 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:47:15.032396 32 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:47:18.033238 32 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:47:21.034502 32 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:47:24.034960 32 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: creating service in namespace services-9812 STEP: creating service service-headless-toggled in namespace services-9812 STEP: creating replication controller service-headless-toggled in namespace services-9812 I1113 03:47:24.050452 32 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-9812, replica count: 3 I1113 03:47:27.101693 32 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1113 03:47:30.102093 32 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I1113 03:47:33.103364 32 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: verifying service is up Nov 13 03:47:33.107: INFO: Creating new host exec pod Nov 13 03:47:33.122: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:35.126: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:37.126: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Nov 13 03:47:37.126: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Nov 13 03:47:47.145: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.27.80:80 2>&1 || true; echo; done" in pod services-9812/verify-service-up-host-exec-pod Nov 13 03:47:47.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.27.80:80 2>&1 || true; echo; done' Nov 13 03:47:47.590: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n" Nov 13 03:47:47.591: INFO: stdout: 
"service-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nse
rvice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\n" Nov 13 03:47:47.592: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.27.80:80 2>&1 || true; echo; done" in pod services-9812/verify-service-up-exec-pod-p9nvg Nov 13 03:47:47.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-up-exec-pod-p9nvg -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.27.80:80 2>&1 || true; echo; done' Nov 13 03:47:48.004: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n" Nov 13 03:47:48.005: INFO: stdout: 
"service-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nse
rvice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9812 STEP: Deleting pod verify-service-up-exec-pod-p9nvg in namespace services-9812 STEP: verifying service-headless is not up Nov 13 03:47:48.022: INFO: Creating new host exec pod Nov 13 03:47:48.034: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:50.039: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:52.038: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:47:52.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.63.108:80 && echo service-down-failed' Nov 13 03:47:54.294: INFO: rc: 28 Nov 13 03:47:54.294: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.63.108:80 && echo service-down-failed" in pod services-9812/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.63.108:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.63.108:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9812 STEP: adding service.kubernetes.io/headless label STEP: verifying service is not up Nov 13 03:47:54.307: INFO: Creating new host exec pod Nov 13 03:47:54.319: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:56.323: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:47:58.323: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:48:00.324: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:48:00.324: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.27.80:80 && echo service-down-failed' Nov 13 03:48:02.656: INFO: rc: 28 Nov 13 03:48:02.656: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.27.80:80 && echo service-down-failed" in pod services-9812/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.27.80:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.27.80:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9812 STEP: removing service.kubernetes.io/headless annotation STEP: verifying service is up Nov 13 03:48:02.671: INFO: Creating new host exec pod Nov 13 03:48:02.682: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:48:04.685: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Nov 13 03:48:04.686: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Nov 13 03:48:08.702: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.27.80:80 2>&1 || true; echo; done" in pod services-9812/verify-service-up-host-exec-pod Nov 13 03:48:08.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.27.80:80 2>&1 || true; echo; done' Nov 13 03:48:09.076: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ 
echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 
1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n" Nov 13 03:48:09.077: INFO: stdout: 
"service-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nse
rvice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\n" Nov 13 03:48:09.077: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.27.80:80 2>&1 || true; echo; done" in pod services-9812/verify-service-up-exec-pod-tnzdh Nov 13 03:48:09.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-up-exec-pod-tnzdh -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.27.80:80 2>&1 || true; echo; done' Nov 13 03:48:09.622: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.27.80:80\n+ echo\n" Nov 13 03:48:09.622: INFO: stdout: 
"service-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nse
rvice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-lvlnw\nservice-headless-toggled-v2ksv\nservice-headless-toggled-v2ksv\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\nservice-headless-toggled-lvlnw\nservice-headless-toggled-h2vzz\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9812 STEP: Deleting pod verify-service-up-exec-pod-tnzdh in namespace services-9812 STEP: verifying service-headless is still not up Nov 13 03:48:09.635: INFO: Creating new host exec pod Nov 13 03:48:09.648: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:48:11.652: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 03:48:13.652: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Nov 13 03:48:13.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.63.108:80 && echo service-down-failed' Nov 13 03:48:15.980: INFO: rc: 28 Nov 13 03:48:15.980: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.63.108:80 && echo service-down-failed" in pod services-9812/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9812 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.63.108:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.233.63.108:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9812 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 03:48:15.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9812" for this suite. 
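The "verifying service has 3 reachable backends" step above can be reproduced by hand with the same wget loop the framework runs. A minimal sketch, assuming the services-9812 namespace, the ClusterIP 10.233.27.80 from this run, and a host exec pod named verify-service-up-host-exec-pod that is already Running:

# run the same 150-request wget loop the e2e framework uses and tally how often each backend answered
kubectl --kubeconfig=/root/.kube/config -n services-9812 exec verify-service-up-host-exec-pod -- /bin/sh -c \
  'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.27.80:80 2>&1 || true; echo; done' \
  | sort | uniq -c
# the service counts as up when all three service-headless-toggled pod names appear in the tally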
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:70.049 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":1,"skipped":333,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:47:49.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-2743
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:47:49.204: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:49.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:51.240: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:53.239: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:55.239: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:57.240: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:47:59.240: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:01.240: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:03.239: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:05.239: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:07.239: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:09.239: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:48:09.244: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:48:11.249: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:48:21.271: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:48:21.271: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:48:21.278: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:21.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2743" for this suite.


S [SKIPPING] [32.196 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:21.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Nov 13 03:48:21.440: INFO: (0) /api/v1/nodes/node2:10250/proxy/logs/:
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Nov 13 03:48:16.045: INFO: Waiting up to 2m0s to get response from 10.244.3.17:8080
Nov 13 03:48:16.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test4dn4v -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.17:8080/clientip'
Nov 13 03:48:16.430: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.17:8080/clientip\n"
Nov 13 03:48:16.430: INFO: stdout: "10.244.4.129:52372"
STEP: Verifying the preserved source ip
Nov 13 03:48:16.430: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Nov 13 03:48:16.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test4dn4v -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Nov 13 03:48:16.680: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Nov 13 03:48:16.680: INFO: stdout: "10.244.4.129:60826"
STEP: Verifying the preserved source ip
Nov 13 03:48:16.680: INFO: Waiting up to 2m0s to get response from 10.244.1.8:8080
Nov 13 03:48:16.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test4dn4v -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip'
Nov 13 03:48:17.210: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip\n"
Nov 13 03:48:17.210: INFO: stdout: "10.244.4.129:35602"
STEP: Verifying the preserved source ip
Nov 13 03:48:17.210: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Nov 13 03:48:17.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test4dn4v -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Nov 13 03:48:17.494: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Nov 13 03:48:17.494: INFO: stdout: "10.244.4.129:33436"
STEP: Verifying the preserved source ip
Nov 13 03:48:17.494: INFO: Waiting up to 2m0s to get response from 10.244.4.129:8080
Nov 13 03:48:17.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test8kq54 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.129:8080/clientip'
Nov 13 03:48:17.903: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.129:8080/clientip\n"
Nov 13 03:48:17.903: INFO: stdout: "10.244.3.17:40918"
STEP: Verifying the preserved source ip
Nov 13 03:48:17.903: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Nov 13 03:48:17.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test8kq54 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Nov 13 03:48:18.157: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Nov 13 03:48:18.157: INFO: stdout: "10.244.3.17:43684"
STEP: Verifying the preserved source ip
Nov 13 03:48:18.157: INFO: Waiting up to 2m0s to get response from 10.244.1.8:8080
Nov 13 03:48:18.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test8kq54 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip'
Nov 13 03:48:18.447: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip\n"
Nov 13 03:48:18.447: INFO: stdout: "10.244.3.17:57706"
STEP: Verifying the preserved source ip
Nov 13 03:48:18.447: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Nov 13 03:48:18.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test8kq54 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Nov 13 03:48:18.695: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Nov 13 03:48:18.695: INFO: stdout: "10.244.3.17:54066"
STEP: Verifying the preserved source ip
Nov 13 03:48:18.695: INFO: Waiting up to 2m0s to get response from 10.244.4.129:8080
Nov 13 03:48:18.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test99pgn -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.129:8080/clientip'
Nov 13 03:48:18.956: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.129:8080/clientip\n"
Nov 13 03:48:18.956: INFO: stdout: "10.244.2.5:57372"
STEP: Verifying the preserved source ip
Nov 13 03:48:18.956: INFO: Waiting up to 2m0s to get response from 10.244.3.17:8080
Nov 13 03:48:18.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test99pgn -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.17:8080/clientip'
Nov 13 03:48:19.184: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.17:8080/clientip\n"
Nov 13 03:48:19.184: INFO: stdout: "10.244.2.5:38752"
STEP: Verifying the preserved source ip
Nov 13 03:48:19.184: INFO: Waiting up to 2m0s to get response from 10.244.1.8:8080
Nov 13 03:48:19.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test99pgn -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip'
Nov 13 03:48:19.406: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip\n"
Nov 13 03:48:19.406: INFO: stdout: "10.244.2.5:56998"
STEP: Verifying the preserved source ip
Nov 13 03:48:19.406: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Nov 13 03:48:19.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-test99pgn -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Nov 13 03:48:19.846: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Nov 13 03:48:19.846: INFO: stdout: "10.244.2.5:35402"
STEP: Verifying the preserved source ip
Nov 13 03:48:19.847: INFO: Waiting up to 2m0s to get response from 10.244.4.129:8080
Nov 13 03:48:19.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-testhhzng -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.129:8080/clientip'
Nov 13 03:48:20.092: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.129:8080/clientip\n"
Nov 13 03:48:20.092: INFO: stdout: "10.244.1.8:53518"
STEP: Verifying the preserved source ip
Nov 13 03:48:20.092: INFO: Waiting up to 2m0s to get response from 10.244.3.17:8080
Nov 13 03:48:20.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-testhhzng -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.17:8080/clientip'
Nov 13 03:48:20.316: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.17:8080/clientip\n"
Nov 13 03:48:20.316: INFO: stdout: "10.244.1.8:33844"
STEP: Verifying the preserved source ip
Nov 13 03:48:20.316: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Nov 13 03:48:20.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-testhhzng -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Nov 13 03:48:20.555: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Nov 13 03:48:20.555: INFO: stdout: "10.244.1.8:60814"
STEP: Verifying the preserved source ip
Nov 13 03:48:20.555: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Nov 13 03:48:20.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-testhhzng -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Nov 13 03:48:20.805: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Nov 13 03:48:20.805: INFO: stdout: "10.244.1.8:51660"
STEP: Verifying the preserved source ip
Nov 13 03:48:20.805: INFO: Waiting up to 2m0s to get response from 10.244.4.129:8080
Nov 13 03:48:20.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-testswqd8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.129:8080/clientip'
Nov 13 03:48:21.061: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.129:8080/clientip\n"
Nov 13 03:48:21.061: INFO: stdout: "10.244.0.10:59092"
STEP: Verifying the preserved source ip
Nov 13 03:48:21.061: INFO: Waiting up to 2m0s to get response from 10.244.3.17:8080
Nov 13 03:48:21.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-testswqd8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.17:8080/clientip'
Nov 13 03:48:21.378: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.17:8080/clientip\n"
Nov 13 03:48:21.378: INFO: stdout: "10.244.0.10:54972"
STEP: Verifying the preserved source ip
Nov 13 03:48:21.378: INFO: Waiting up to 2m0s to get response from 10.244.2.5:8080
Nov 13 03:48:21.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-testswqd8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip'
Nov 13 03:48:21.628: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.5:8080/clientip\n"
Nov 13 03:48:21.628: INFO: stdout: "10.244.0.10:60732"
STEP: Verifying the preserved source ip
Nov 13 03:48:21.628: INFO: Waiting up to 2m0s to get response from 10.244.1.8:8080
Nov 13 03:48:21.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-4914 exec no-snat-testswqd8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip'
Nov 13 03:48:21.872: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip\n"
Nov 13 03:48:21.873: INFO: stdout: "10.244.0.10:50708"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:21.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-4914" for this suite.


• [SLOW TEST:15.931 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":4,"skipped":442,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
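The NoSNAT pass above reduces to a check that can be repeated by hand while the spec's pods still exist: exec into one test pod, curl another pod's /clientip handler on port 8080, and confirm that the address echoed back is the source pod's IP rather than a node IP. A minimal shell sketch reusing the namespace, pod names and pod IPs from this run (all of them are ephemeral and will differ on any other run):

kubectl --kubeconfig=/root/.kube/config -n no-snat-test-4914 exec no-snat-test4dn4v -- \
  /bin/sh -c 'curl -q -s --connect-timeout 30 10.244.3.17:8080/clientip'
# Expected: "<source pod IP>:<ephemeral port>", e.g. 10.244.4.129:52372 as logged above.
# A node address (10.10.190.x in this cluster) in the reply would mean the packet was SNATed.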
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:22.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-9187
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-9187
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:22.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9187" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":5,"skipped":571,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
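The collision behaviour exercised above is easy to reproduce with plain kubectl. A rough sketch with hypothetical service names and an arbitrary port from the default 30000-32767 NodePort range; the apiserver should reject the second create with an error along the lines of "provided port is already allocated" until the first service releases the port:

kubectl create service nodeport nodeport-collision-1 --tcp=80:80 --node-port=30080
kubectl create service nodeport nodeport-collision-2 --tcp=80:80 --node-port=30080   # rejected: port already allocated
kubectl delete service nodeport-collision-1                                          # frees 30080
kubectl create service nodeport nodeport-collision-2 --tcp=80:80 --node-port=30080   # now succeeds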
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:47:55.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-8808
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:47:55.362: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:47:55.394: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:57.399: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:59.398: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:01.399: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:03.399: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:05.397: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:07.434: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:09.398: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:11.399: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:13.399: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:15.397: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:48:15.402: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 13 03:48:17.406: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:48:23.430: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:48:23.430: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:48:23.436: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:23.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8808" for this suite.


S [SKIPPING] [28.210 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
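The skipped spec targets client-IP based session affinity, which on the Service side is a single field. A minimal sketch of the kind of Service such a spec drives traffic through; the name, selector and timeout below are hypothetical and not taken from this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo              # hypothetical name
spec:
  selector:
    app: netserver                 # hypothetical selector
  sessionAffinity: ClientIP        # pin each client IP to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
  - port: 80
    targetPort: 8080
EOF

With this in place, repeated requests from the same client IP should keep landing on the same backend pod, which is the general behaviour the skipped spec exercises.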
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:23.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov 13 03:48:23.523: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:23.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-4601" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:23.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Nov 13 03:48:23.721: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:23.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-1359" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:03.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Nov 13 03:48:03.873: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:05.876: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:07.878: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:09.877: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Nov 13 03:48:09.892: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:12.001: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:13.896: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Nov 13 03:48:15.914: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:17.919: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:19.918: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:21.919: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:23.917: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Nov 13 03:48:23.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-2760 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Nov 13 03:48:25.245: INFO: stderr: "+ conntrack -L -f ipv4 -d 10.10.190.208\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 6 flow entries have been shown.\n"
Nov 13 03:48:25.245: INFO: stdout: "tcp      6 3595 CLOSE_WAIT src=10.244.3.22 dst=10.10.190.208 sport=38240 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=13846 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Nov 13 03:48:25.245: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3595 CLOSE_WAIT src=10.244.3.22 dst=10.10.190.208 sport=38240 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=13846 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:25.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-2760" for this suite.


• [SLOW TEST:21.424 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":5,"skipped":973,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
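The assertion behind this pass is visible in the conntrack output above: the third field of the CLOSE_WAIT entry (3595) is the remaining conntrack timeout in seconds, which should sit just under the one-hour TCP CLOSE_WAIT timeout that kube-proxy configures by default. The same check can be repeated from the helper pod while the spec's pods still exist; the names, node IP and port below are the ones from this run:

kubectl --kubeconfig=/root/.kube/config -n kube-proxy-2760 exec e2e-net-exec -- \
  /bin/sh -c "conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302'"
# Example match (from the log above):
#   tcp      6 3595 CLOSE_WAIT src=10.244.3.22 dst=10.10.190.208 sport=38240 dport=11302 ...
# where 3595 is the number of seconds left before the entry expires.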
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:25.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Nov 13 03:48:25.387: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:25.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-1933" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.045 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:25.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Nov 13 03:48:25.496: INFO: (0) /api/v1/nodes/node1/proxy/logs/: 
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Nov 13 03:48:16.364: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-196788c9-af3d-47dd-a33a-d865520c1b74  dns-8564  a3c16e42-6d14-45c9-a9af-1e2a5e3435e5 144453 0 2021-11-13 03:48:16 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-11-13 03:48:16 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-gm4x4,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-4hwp6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4hwp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 13 03:48:22.377: INFO: testServerIP is 10.244.4.131
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Nov 13 03:48:22.387: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-8564  43c8ccc1-2840-4fc6-b634-d72d6a250bd2 144706 0 2021-11-13 03:48:22 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-11-13 03:48:22 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sgg9b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sgg9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:n
il,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.4.131],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Nov 13 03:48:34.395: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-8564 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:34.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Nov 13 03:48:34.484: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-8564 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:34.484: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:34.592: INFO: Deleting pod e2e-dns-utils...
Nov 13 03:48:34.600: INFO: Deleting pod e2e-configmap-dns-server-196788c9-af3d-47dd-a33a-d865520c1b74...
Nov 13 03:48:34.606: INFO: Deleting configmap e2e-coredns-configmap-gm4x4...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:34.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8564" for this suite.


• [SLOW TEST:18.293 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":2,"skipped":490,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
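What this spec builds can be reproduced directly: a pod with dnsPolicy: None and an explicit dnsConfig, whose /etc/resolv.conf should then contain exactly the configured nameserver, search path and ndots option. A sketch of such a pod using the values from this run; the pod name is hypothetical, and 10.244.4.131 was the spec's injected CoreDNS pod, so without it the dig query will simply fail to resolve; the point is the contents of resolv.conf:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: e2e-dns-utils-demo          # hypothetical name
spec:
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["pause"]
  dnsPolicy: "None"
  dnsConfig:
    nameservers: ["10.244.4.131"]
    searches: ["resolv.conf.local"]
    options:
    - name: ndots
      value: "2"
EOF
# Verify, as the spec does:
kubectl exec e2e-dns-utils-demo -- cat /etc/resolv.conf
kubectl exec e2e-dns-utils-demo -- dig +short +search notexistname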
------------------------------
Nov 13 03:48:34.897: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:47:53.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-8890
STEP: creating a client pod for probing the service svc-udp
Nov 13 03:47:53.985: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:55.988: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:57.987: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:59.987: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:01.988: INFO: The status of Pod pod-client is Running (Ready = true)
Nov 13 03:48:02.003: INFO: Pod client logs: Sat Nov 13 03:47:59 UTC 2021
Sat Nov 13 03:47:59 UTC 2021 Try: 1

Sat Nov 13 03:47:59 UTC 2021 Try: 2

Sat Nov 13 03:47:59 UTC 2021 Try: 3

Sat Nov 13 03:47:59 UTC 2021 Try: 4

Sat Nov 13 03:47:59 UTC 2021 Try: 5

Sat Nov 13 03:47:59 UTC 2021 Try: 6

Sat Nov 13 03:47:59 UTC 2021 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Nov 13 03:48:02.014: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:04.018: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:06.020: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:08.020: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8890 to expose endpoints map[pod-server-1:[80]]
Nov 13 03:48:08.032: INFO: successfully validated that service svc-udp in namespace conntrack-8890 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Nov 13 03:48:18.062: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:20.065: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:22.066: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:24.066: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:26.068: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Nov 13 03:48:26.070: INFO: Cleaning up pod-server-1 pod
Nov 13 03:48:26.077: INFO: Waiting for pod pod-server-1 to disappear
Nov 13 03:48:26.080: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8890 to expose endpoints map[pod-server-2:[80]]
Nov 13 03:48:26.087: INFO: successfully validated that service svc-udp in namespace conntrack-8890 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:36.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-8890" for this suite.


• [SLOW TEST:42.497 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":2,"skipped":180,"failed":0}
Nov 13 03:48:36.439: INFO: Running AfterSuite actions on all nodes
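The scenario above, a UDP ClusterIP service whose only backend is deleted and replaced while a client keeps sending datagrams, can be approximated outside the suite. A rough sketch; the agnhost image matches the one used in this run, while the names, ports, serve-hostname flags and the busybox nc client loop are illustrative assumptions rather than the suite's exact tooling:

# UDP backend that answers every datagram with its hostname.
kubectl run udp-backend --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --command -- \
  /agnhost serve-hostname --udp --port 8081
kubectl expose pod udp-backend --name=svc-udp --protocol=UDP --port=80 --target-port=8081
# Client probing the service once a second (assumes a netcat build with UDP support).
kubectl run udp-client --image=busybox --restart=Never --command -- \
  /bin/sh -c 'while true; do echo probe | nc -u -w1 svc-udp 80; sleep 1; done'
# After deleting udp-backend and creating a replacement pod carrying the same
# run=udp-backend label, replies should resume: the stale UDP conntrack entry
# for the old backend must not keep black-holing the client's flow.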


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:46:36.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-883
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:46:36.463: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:46:36.496: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:38.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:40.501: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:42.502: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:44.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:46:46.499: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:48.498: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:50.499: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:52.500: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:54.501: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:56.499: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:46:58.499: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:46:58.504: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:47:06.543: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:47:06.543: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Nov 13 03:47:06.564: INFO: Service node-port-service in namespace nettest-883 found.
Nov 13 03:47:06.577: INFO: Service session-affinity-service in namespace nettest-883 found.
STEP: Waiting for NodePort service to expose endpoint
Nov 13 03:47:07.580: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Nov 13 03:47:08.584: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: pod-Service(hostNetwork): http
STEP: dialing(http) test-container-pod --> 10.233.13.7:80 (config.clusterIP)
Nov 13 03:47:08.588: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.233.13.7&port=80&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:08.588: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:09.079: INFO: Waiting for responses: map[node2:{}]
Nov 13 03:47:11.084: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.233.13.7&port=80&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:11.084: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:11.324: INFO: Waiting for responses: map[node2:{}]
Nov 13 03:47:13.328: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.233.13.7&port=80&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:13.329: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:15.654: INFO: Waiting for responses: map[]
Nov 13 03:47:15.654: INFO: reached 10.233.13.7 after 2/34 tries
STEP: dialing(http) test-container-pod --> 10.10.190.207:32384 (nodeIP)
Nov 13 03:47:15.658: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:15.658: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:15.819: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:17.824: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:17.824: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:17.933: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:19.936: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:19.936: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:20.053: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:22.060: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:22.060: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:22.287: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:24.290: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:24.290: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:24.382: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:26.386: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:26.386: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:26.834: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:28.838: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:28.838: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:28.924: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:30.928: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:30.928: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:31.154: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:33.158: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:33.158: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:33.434: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:35.437: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:35.437: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:36.198: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:38.202: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:38.202: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:38.436: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:40.443: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:40.443: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:41.099: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:43.103: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:43.103: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:43.725: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:45.729: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:45.729: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:45.834: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:47.839: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:47.839: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:47.983: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:49.987: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:49.987: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:50.323: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:52.326: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:52.326: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:52.419: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:54.422: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:54.422: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:54.620: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:56.624: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:56.624: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:56.934: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:47:58.938: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:47:58.938: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:47:59.103: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:01.108: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:01.108: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:01.237: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:03.240: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:03.240: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:03.327: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:05.330: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:05.330: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:05.428: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:07.434: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:07.434: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:07.819: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:09.823: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:09.823: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:09.910: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:12.002: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:12.002: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:12.254: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:14.260: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:14.260: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:14.526: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:16.530: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:16.530: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:16.734: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:18.739: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:18.739: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:18.824: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:20.827: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:20.827: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:20.922: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:22.926: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:22.926: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:25.236: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:27.240: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:27.240: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:28.044: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:30.048: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:30.048: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:30.449: INFO: Waiting for responses: map[node1:{} node2:{}]
Nov 13 03:48:32.452: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'] Namespace:nettest-883 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Nov 13 03:48:32.452: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 03:48:32.548: INFO: Waiting for responses: map[node1:{} node2:{}]
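The loop above is the suite's connectivity probe: it execs into test-container-pod and asks the agnhost /dial helper at 10.244.3.242:9080 to fan a hostname request out to 10.10.190.207:32384, then waits until both expected hostnames (node1 and node2) have answered. A single iteration can be reproduced by hand roughly as below; this is a sketch only, assuming the nettest-883 namespace and its pods from this run still exist and that kubectl uses the same kubeconfig.

  # One manual run of the probe that the log above keeps retrying.
  KUBECONFIG=/root/.kube/config
  NS=nettest-883
  DIAL='http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'

  # Same pod, container, and curl invocation as in the ExecWithOptions entries above.
  kubectl --kubeconfig="$KUBECONFIG" -n "$NS" exec test-container-pod -c webserver -- \
    /bin/sh -c "curl -g -q -s '$DIAL'"

  # On a healthy run the reply should list both backend hostnames (node1 and node2);
  # the failure recorded below shows the framework retrieving an empty set instead.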
Nov 13 03:48:34.550: INFO: 
Output of kubectl describe pod nettest-883/netserver-0:

Nov 13 03:48:34.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-883 describe pod netserver-0 --namespace=nettest-883'
Nov 13 03:48:34.743: INFO: stderr: ""
Nov 13 03:48:34.743: INFO: stdout: "Name:         netserver-0\nNamespace:    nettest-883\nPriority:     0\nNode:         node1/10.10.190.207\nStart Time:   Sat, 13 Nov 2021 03:46:36 +0000\nLabels:       selector-c9dde520-4707-491b-934a-32a3d54d07f2=true\nAnnotations:  kubernetes.io/psp: privileged\nStatus:       Running\nIP:           10.10.190.207\nIPs:\n  IP:  10.10.190.207\nContainers:\n  webserver:\n    Container ID:  docker://ed02ca3be2110f1992fd666eee12d4831a686a3d4eeb4cfaeda2f2e0beee6296\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    8080/TCP, 8081/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n      --udp-listen-addresses=$(HOST_IP),$(POD_IPS)\n    State:          Running\n      Started:      Sat, 13 Nov 2021 03:46:42 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:\n      HOST_IP:   (v1:status.hostIP)\n      POD_IPS:   (v1:status.podIPs)\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rg52l (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-rg52l:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       \n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node1\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  118s  default-scheduler  Successfully assigned nettest-883/netserver-0 to node1\n  Normal  Pulling    114s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     113s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 559.222924ms\n  Normal  Created    113s  kubelet            Created container webserver\n  Normal  Started    112s  kubelet            Started container webserver\n"
Nov 13 03:48:34.743: INFO: Name:         netserver-0
Namespace:    nettest-883
Priority:     0
Node:         node1/10.10.190.207
Start Time:   Sat, 13 Nov 2021 03:46:36 +0000
Labels:       selector-c9dde520-4707-491b-934a-32a3d54d07f2=true
Annotations:  kubernetes.io/psp: privileged
Status:       Running
IP:           10.10.190.207
IPs:
  IP:  10.10.190.207
Containers:
  webserver:
    Container ID:  docker://ed02ca3be2110f1992fd666eee12d4831a686a3d4eeb4cfaeda2f2e0beee6296
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    8080/TCP, 8081/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
      --udp-listen-addresses=$(HOST_IP),$(POD_IPS)
    State:          Running
      Started:      Sat, 13 Nov 2021 03:46:42 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:
      HOST_IP:   (v1:status.hostIP)
      POD_IPS:   (v1:status.podIPs)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rg52l (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-rg52l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node1
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  118s  default-scheduler  Successfully assigned nettest-883/netserver-0 to node1
  Normal  Pulling    114s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     113s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 559.222924ms
  Normal  Created    113s  kubelet            Created container webserver
  Normal  Started    112s  kubelet            Started container webserver
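The describe output above shows what sits behind the probe: an agnhost netexec server on node1, published on host ports 8080/TCP and 8081/UDP (the pod IP equals the node IP, so it runs on the host network), with liveness and readiness checks against /healthz on 8080. Because the dial request asks each backend for its hostname, the backend can be spot-checked directly, bypassing port 32384 entirely; a sketch, assuming the node IPs from this run are reachable from the machine running curl:

  # netserver-0 on node1, reached via its host port rather than the port under test.
  curl -s http://10.10.190.207:8080/healthz    # the kubelet's probe target; expect an HTTP 200
  curl -s http://10.10.190.207:8080/hostname   # the request the /dial fan-out issues; expect "node1"

  # The sibling pod on node2, for comparison.
  curl -s http://10.10.190.208:8080/hostname   # expect "node2"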

Nov 13 03:48:34.743: INFO: 
Output of kubectl describe pod nettest-883/netserver-1:

Nov 13 03:48:34.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=nettest-883 describe pod netserver-1 --namespace=nettest-883'
Nov 13 03:48:34.932: INFO: stderr: ""
Nov 13 03:48:34.932: INFO: stdout: "Name:         netserver-1\nNamespace:    nettest-883\nPriority:     0\nNode:         node2/10.10.190.208\nStart Time:   Sat, 13 Nov 2021 03:46:36 +0000\nLabels:       selector-c9dde520-4707-491b-934a-32a3d54d07f2=true\nAnnotations:  kubernetes.io/psp: privileged\nStatus:       Running\nIP:           10.10.190.208\nIPs:\n  IP:  10.10.190.208\nContainers:\n  webserver:\n    Container ID:  docker://8a21613e6a967fe28cccc848a573636ed928a33d347495a2cd032c6c01cb9891\n    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Ports:         8080/TCP, 8081/UDP\n    Host Ports:    8080/TCP, 8081/UDP\n    Args:\n      netexec\n      --http-port=8080\n      --udp-port=8081\n      --udp-listen-addresses=$(HOST_IP),$(POD_IPS)\n    State:          Running\n      Started:      Sat, 13 Nov 2021 03:46:43 +0000\n    Ready:          True\n    Restart Count:  0\n    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n    Environment:\n      HOST_IP:   (v1:status.hostIP)\n      POD_IPS:   (v1:status.podIPs)\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fv4pm (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-fv4pm:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       \n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              kubernetes.io/hostname=node2\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  118s  default-scheduler  Successfully assigned nettest-883/netserver-1 to node2\n  Normal  Pulling    113s  kubelet            Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n  Normal  Pulled     111s  kubelet            Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 2.100847836s\n  Normal  Created    111s  kubelet            Created container webserver\n  Normal  Started    111s  kubelet            Started container webserver\n"
Nov 13 03:48:34.932: INFO: Name:         netserver-1
Namespace:    nettest-883
Priority:     0
Node:         node2/10.10.190.208
Start Time:   Sat, 13 Nov 2021 03:46:36 +0000
Labels:       selector-c9dde520-4707-491b-934a-32a3d54d07f2=true
Annotations:  kubernetes.io/psp: privileged
Status:       Running
IP:           10.10.190.208
IPs:
  IP:  10.10.190.208
Containers:
  webserver:
    Container ID:  docker://8a21613e6a967fe28cccc848a573636ed928a33d347495a2cd032c6c01cb9891
    Image:         k8s.gcr.io/e2e-test-images/agnhost:2.32
    Image ID:      docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1
    Ports:         8080/TCP, 8081/UDP
    Host Ports:    8080/TCP, 8081/UDP
    Args:
      netexec
      --http-port=8080
      --udp-port=8081
      --udp-listen-addresses=$(HOST_IP),$(POD_IPS)
    State:          Running
      Started:      Sat, 13 Nov 2021 03:46:43 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8080/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment:
      HOST_IP:   (v1:status.hostIP)
      POD_IPS:   (v1:status.podIPs)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fv4pm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-fv4pm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/hostname=node2
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  118s  default-scheduler  Successfully assigned nettest-883/netserver-1 to node2
  Normal  Pulling    113s  kubelet            Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
  Normal  Pulled     111s  kubelet            Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 2.100847836s
  Normal  Created    111s  kubelet            Created container webserver
  Normal  Started    111s  kubelet            Started container webserver

Nov 13 03:48:34.932: INFO: encountered error during dial (did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'
retrieved map[]
expected map[node1:{} node2:{}])
Nov 13 03:48:34.933: FAIL: failed dialing endpoint, did not find expected responses... 
Tries 34
Command curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'
retrieved map[]
expected map[node1:{} node2:{}]

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001e2a300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001e2a300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001e2a300, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
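The failure above means 34 consecutive probes through 10.10.190.207:32384 came back with no hostnames at all (retrieved map[]) even though both netserver pods stayed Running and Ready, which points at the path through port 32384 rather than at the backends. The excerpt does not show the Service the test created, so the checks below are only an illustrative sketch: list what backs the namespace, then compare the port under test with the backends reached directly.

  KUBECONFIG=/root/.kube/config
  NS=nettest-883

  # Which Services and Endpoints exist in the test namespace, and what do they point at?
  kubectl --kubeconfig="$KUBECONFIG" -n "$NS" get svc,endpoints -o wide

  # Does 10.10.190.207:32384 answer from a pod on the host network (host-test-container-pod runs on node2)?
  kubectl --kubeconfig="$KUBECONFIG" -n "$NS" exec host-test-container-pod -- \
    curl -g -q -s --max-time 5 http://10.10.190.207:32384/hostname

  # The backends themselves, reached directly on their host ports.
  curl -s --max-time 5 http://10.10.190.207:8080/hostname
  curl -s --max-time 5 http://10.10.190.208:8080/hostname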
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-883".
STEP: Found 20 events.
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:36 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-883/netserver-0 to node1
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:36 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-883/netserver-1 to node2
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:40 +0000 UTC - event for netserver-0: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:41 +0000 UTC - event for netserver-0: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 559.222924ms
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:41 +0000 UTC - event for netserver-0: {kubelet node1} Created: Created container webserver
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:41 +0000 UTC - event for netserver-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:42 +0000 UTC - event for netserver-0: {kubelet node1} Started: Started container webserver
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:43 +0000 UTC - event for netserver-1: {kubelet node2} Started: Started container webserver
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:43 +0000 UTC - event for netserver-1: {kubelet node2} Created: Created container webserver
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:43 +0000 UTC - event for netserver-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 2.100847836s
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:58 +0000 UTC - event for host-test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-883/host-test-container-pod to node2
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:46:58 +0000 UTC - event for test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-883/test-container-pod to node1
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:47:01 +0000 UTC - event for host-test-container-pod: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:47:02 +0000 UTC - event for host-test-container-pod: {kubelet node2} Created: Created container agnhost-container
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:47:02 +0000 UTC - event for host-test-container-pod: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 836.635691ms
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:47:02 +0000 UTC - event for test-container-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 321.437443ms
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:47:02 +0000 UTC - event for test-container-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:47:03 +0000 UTC - event for host-test-container-pod: {kubelet node2} Started: Started container agnhost-container
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:47:03 +0000 UTC - event for test-container-pod: {kubelet node1} Created: Created container webserver
Nov 13 03:48:34.939: INFO: At 2021-11-13 03:47:03 +0000 UTC - event for test-container-pod: {kubelet node1} Started: Started container webserver
Nov 13 03:48:34.942: INFO: POD                      NODE   PHASE    GRACE  CONDITIONS
Nov 13 03:48:34.942: INFO: host-test-container-pod  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:58 +0000 UTC  }]
Nov 13 03:48:34.942: INFO: netserver-0              node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:36 +0000 UTC  }]
Nov 13 03:48:34.943: INFO: netserver-1              node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:36 +0000 UTC  }]
Nov 13 03:48:34.943: INFO: test-container-pod       node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:46:58 +0000 UTC  }]
Nov 13 03:48:34.943: INFO: 
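The event list and pod conditions above are the framework's automatic failure dump for the namespace; the same picture can be pulled by hand with plain kubectl, assuming the namespace has not been torn down yet:

  kubectl --kubeconfig=/root/.kube/config -n nettest-883 get events --sort-by=.lastTimestamp
  kubectl --kubeconfig=/root/.kube/config -n nettest-883 get pods -o wide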
Nov 13 03:48:34.947: INFO: 
Logging node info for node master1
Nov 13 03:48:34.950: INFO: Node Info: &Node{ObjectMeta:{master1    56d66c54-e52b-494a-a758-e4b658c4b245 145032 0 2021-11-12 21:05:50 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:32 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:32 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:32 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:48:32 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:48:34.950: INFO: 
Logging kubelet events for node master1
Nov 13 03:48:34.952: INFO: 
Logging pods the kubelet thinks are on node master1
Nov 13 03:48:34.971: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:34.971: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov 13 03:48:34.971: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:48:34.971: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:48:34.971: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 13 03:48:34.971: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:34.971: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:48:34.971: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:34.971: INFO: 	Container coredns ready: true, restart count 2
Nov 13 03:48:34.971: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:48:34.971: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:48:34.971: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:48:34.972: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:34.972: INFO: 	Container kube-scheduler ready: true, restart count 0
Nov 13 03:48:34.972: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:34.972: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:48:34.972: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:34.972: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:48:34.972: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:48:34.972: INFO: 	Container docker-registry ready: true, restart count 0
Nov 13 03:48:34.972: INFO: 	Container nginx ready: true, restart count 0
W1113 03:48:34.986118      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:48:35.072: INFO: 
Latency metrics for node master1
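The master1 block above, and the analogous blocks for the other nodes that follow, are the framework's generic per-node diagnostics: the node object, the pods the kubelet reports, and latency metrics gathered through the metrics grabber. Roughly equivalent information can be collected manually per node; a sketch for master1 (the metrics-grabber output itself is not reproduced here):

  # Node object and conditions, as in the "Logging node info" block above.
  kubectl --kubeconfig=/root/.kube/config describe node master1

  # Pods running on master1, as in the "Logging pods ..." block above.
  kubectl --kubeconfig=/root/.kube/config get pods -A -o wide --field-selector spec.nodeName=master1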
Nov 13 03:48:35.072: INFO: 
Logging node info for node master2
Nov 13 03:48:35.077: INFO: Node Info: &Node{ObjectMeta:{master2    9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 144937 0 2021-11-12 21:06:20 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:29 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:29 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:29 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:48:29 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:48:35.078: INFO: 
Logging kubelet events for node master2
Nov 13 03:48:35.080: INFO: 
Logging pods the kubelet thinks are on node master2
Nov 13 03:48:35.104: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.104: INFO: 	Container nfd-controller ready: true, restart count 0
Nov 13 03:48:35.104: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.104: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:48:35.104: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.104: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:48:35.104: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.104: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:48:35.104: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:48:35.104: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:48:35.104: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:48:35.104: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.104: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:48:35.104: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.104: INFO: 	Container coredns ready: true, restart count 1
Nov 13 03:48:35.104: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:48:35.104: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:48:35.104: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:48:35.104: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.104: INFO: 	Container kube-controller-manager ready: true, restart count 2
W1113 03:48:35.117295      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:48:35.194: INFO: 
Latency metrics for node master2
Nov 13 03:48:35.194: INFO: 
Logging node info for node master3
Nov 13 03:48:35.197: INFO: Node Info: &Node{ObjectMeta:{master3    fce0cd54-e4d8-4ce1-b720-522aad2d7989 144934 0 2021-11-12 21:06:31 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:29 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:29 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:29 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:48:29 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:48:35.197: INFO: 
Logging kubelet events for node master3
Nov 13 03:48:35.200: INFO: 
Logging pods the kubelet thinks are on node master3
Nov 13 03:48:35.215: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.215: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:48:35.215: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.215: INFO: 	Container kube-controller-manager ready: true, restart count 3
Nov 13 03:48:35.215: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.215: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:48:35.215: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:48:35.215: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:48:35.215: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:48:35.215: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.215: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:48:35.215: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:48:35.215: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:48:35.215: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:48:35.215: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.215: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:48:35.215: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.215: INFO: 	Container autoscaler ready: true, restart count 1
W1113 03:48:35.229221      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:48:35.299: INFO: 
Latency metrics for node master3
Nov 13 03:48:35.299: INFO: 
Logging node info for node node1
Nov 13 03:48:35.302: INFO: Node Info: &Node{ObjectMeta:{node1    6ceb907c-9809-4d18-88c6-b1e10ba80f97 144992 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-11-13 01:56:37 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-11-13 01:56:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:30 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:30 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:30 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:48:30 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 
k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:48:35.303: INFO: 
Logging kubelet events for node node1
Nov 13 03:48:35.305: INFO: 
Logging pods the kubelet thinks are on node node1
Nov 13 03:48:35.323: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:48:35.323: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:48:35.323: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 13 03:48:35.323: INFO: netserver-0 started at 2021-11-13 03:47:49 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container webserver ready: true, restart count 0
Nov 13 03:48:35.323: INFO: netserver-0 started at 2021-11-13 03:48:24 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container webserver ready: false, restart count 0
Nov 13 03:48:35.323: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:48:35.323: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 13 03:48:35.323: INFO: service-headless-toggled-v2ksv started at 2021-11-13 03:47:24 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container service-headless-toggled ready: true, restart count 0
Nov 13 03:48:35.323: INFO: startup-script started at 2021-11-13 03:47:42 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container startup-script ready: true, restart count 0
Nov 13 03:48:35.323: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:48:35.323: INFO: 	Container init ready: false, restart count 0
Nov 13 03:48:35.323: INFO: 	Container install ready: false, restart count 0
Nov 13 03:48:35.323: INFO: netserver-0 started at 2021-11-13 03:46:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container webserver ready: true, restart count 0
Nov 13 03:48:35.323: INFO: e2e-net-client started at 2021-11-13 03:48:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container e2e-net-client ready: false, restart count 0
Nov 13 03:48:35.323: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 13 03:48:35.323: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov 13 03:48:35.323: INFO: pod-client started at 2021-11-13 03:47:53 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container pod-client ready: true, restart count 0
Nov 13 03:48:35.323: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container config-reloader ready: true, restart count 0
Nov 13 03:48:35.323: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov 13 03:48:35.323: INFO: 	Container grafana ready: true, restart count 0
Nov 13 03:48:35.323: INFO: 	Container prometheus ready: true, restart count 1
Nov 13 03:48:35.323: INFO: pod-client started at 2021-11-13 03:48:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container pod-client ready: false, restart count 0
Nov 13 03:48:35.323: INFO: e2e-net-exec started at 2021-11-13 03:48:03 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container e2e-net-exec ready: true, restart count 0
Nov 13 03:48:35.323: INFO: up-down-1-74sqh started at 2021-11-13 03:48:10 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container up-down-1 ready: true, restart count 0
Nov 13 03:48:35.323: INFO: service-headless-2wn8b started at 2021-11-13 03:47:05 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container service-headless ready: false, restart count 0
Nov 13 03:48:35.323: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:48:35.323: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:48:35.323: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:48:35.323: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:48:35.323: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 13 03:48:35.323: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:48:35.323: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:48:35.323: INFO: 	Container reconcile ready: true, restart count 0
Nov 13 03:48:35.323: INFO: test-container-pod started at 2021-11-13 03:46:58 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container webserver ready: true, restart count 0
Nov 13 03:48:35.323: INFO: up-down-1-jqxgs started at 2021-11-13 03:48:10 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container up-down-1 ready: true, restart count 0
Nov 13 03:48:35.323: INFO: netserver-0 started at 2021-11-13 03:48:22 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container webserver ready: false, restart count 0
Nov 13 03:48:35.323: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:48:35.323: INFO: 	Container prometheus-operator ready: true, restart count 0
Nov 13 03:48:35.323: INFO: no-snat-test8kq54 started at 2021-11-13 03:48:06 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container no-snat-test ready: true, restart count 0
Nov 13 03:48:35.323: INFO: up-down-1-4klr5 started at 2021-11-13 03:48:10 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container up-down-1 ready: true, restart count 0
Nov 13 03:48:35.323: INFO: verify-service-up-host-exec-pod started at 2021-11-13 03:48:31 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.323: INFO: 	Container agnhost-container ready: false, restart count 0
W1113 03:48:35.336694      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:48:35.849: INFO: 
Latency metrics for node node1
Nov 13 03:48:35.849: INFO: 
Logging node info for node node2
Nov 13 03:48:35.853: INFO: Node Info: &Node{ObjectMeta:{node2    652722dd-12b1-4529-ba4d-a00c590e4a68 145053 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 01:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-13 02:52:24 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:33 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:33 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:48:33 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:48:33 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:48:35.854: INFO: 
Logging kubelet events for node node2
Nov 13 03:48:35.856: INFO: 
Logging pods the kubelet thinks are on node node2
Nov 13 03:48:35.874: INFO: netserver-1 started at 2021-11-13 03:48:24 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container webserver ready: false, restart count 0
Nov 13 03:48:35.874: INFO: echo-sourceip started at 2021-11-13 03:48:30 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container agnhost-container ready: false, restart count 0
Nov 13 03:48:35.874: INFO: host-test-container-pod started at 2021-11-13 03:46:58 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:48:35.874: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 13 03:48:35.874: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:48:35.874: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:48:35.874: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 13 03:48:35.874: INFO: netserver-1 started at 2021-11-13 03:46:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container webserver ready: true, restart count 0
Nov 13 03:48:35.874: INFO: execpodv94hr started at 2021-11-13 03:47:17 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:48:35.874: INFO: test-container-pod started at 2021-11-13 03:48:17 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container webserver ready: false, restart count 0
Nov 13 03:48:35.874: INFO: up-down-2-9d7hj started at 2021-11-13 03:48:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container up-down-2 ready: true, restart count 0
Nov 13 03:48:35.874: INFO: netserver-1 started at 2021-11-13 03:48:22 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container webserver ready: false, restart count 0
Nov 13 03:48:35.874: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:48:35.874: INFO: 	Container reconcile ready: true, restart count 0
Nov 13 03:48:35.874: INFO: up-down-2-zcxcg started at 2021-11-13 03:48:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container up-down-2 ready: true, restart count 0
Nov 13 03:48:35.874: INFO: pod-server-2 started at 2021-11-13 03:48:18 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:48:35.874: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:48:35.874: INFO: 	Container init ready: false, restart count 0
Nov 13 03:48:35.874: INFO: 	Container install ready: false, restart count 0
Nov 13 03:48:35.874: INFO: up-down-2-zs6bq started at 2021-11-13 03:48:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container up-down-2 ready: true, restart count 0
Nov 13 03:48:35.874: INFO: boom-server started at 2021-11-13 03:47:34 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container boom-server ready: true, restart count 0
Nov 13 03:48:35.874: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Nov 13 03:48:35.874: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:48:35.874: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:48:35.874: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container tas-extender ready: true, restart count 0
Nov 13 03:48:35.874: INFO: netserver-1 started at 2021-11-13 03:47:55 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container webserver ready: false, restart count 0
Nov 13 03:48:35.874: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:48:35.874: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 13 03:48:35.874: INFO: nodeport-update-service-vkrtp started at 2021-11-13 03:47:05 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov 13 03:48:35.874: INFO: nodeport-update-service-tqj5d started at 2021-11-13 03:47:05 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov 13 03:48:35.874: INFO: e2e-net-server started at 2021-11-13 03:48:09 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container e2e-net-server ready: false, restart count 0
Nov 13 03:48:35.874: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.874: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:48:35.874: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:48:35.875: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:48:35.875: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 13 03:48:35.875: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.875: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:48:35.875: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:48:35.875: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
W1113 03:48:35.887688      36 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:48:36.545: INFO: 
Latency metrics for node node2
Nov 13 03:48:36.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-883" for this suite.


• Failure [120.220 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Nov 13 03:48:34.933: failed dialing endpoint, did not find expected responses... 
    Tries 34
    Command curl -g -q -s 'http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1'
    retrieved map[]
    expected map[node1:{} node2:{}]

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
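
    The failing check above drives the agnhost netserver's /dial handler: the pod at 10.244.3.242:9080 is asked to make one HTTP request (tries=1) to the hostNetwork endpoint 10.10.190.207:32384 and report the hostname that answered, the framework repeats that probe (34 tries here) and unions the hostnames into the map, so the empty map[] means no backend ever replied. A minimal Go sketch that re-issues the same probe by hand follows; it assumes kubectl access with the same kubeconfig, that the host-test-container-pod from this run still exists in namespace nettest-883, and that its image ships curl (as the agnhost-based exec pods used here do), none of which holds once the namespace is destroyed.

    // dialcheck.go: manually repeat the /dial probe from the failure above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same URL the framework logged: ask the netserver pod to dial the
        // hostNetwork NodePort once and echo the hostname of the backend that answered.
        url := "http://10.244.3.242:9080/dial?request=hostname&protocol=http&host=10.10.190.207&port=32384&tries=1"
        out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
            "--namespace=nettest-883", "exec", "host-test-container-pod", "--",
            "curl", "-g", "-q", "-s", url).CombinedOutput()
        if err != nil {
            fmt.Println("dial failed:", err)
        }
        // A healthy service prints something like {"responses":["node1"]};
        // the failure above corresponds to an empty responses list on every try.
        fmt.Println(string(out))
    }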
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:25.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Nov 13 03:48:25.661: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:27.665: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:29.664: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Nov 13 03:48:29.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7133 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Nov 13 03:48:30.107: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Nov 13 03:48:30.107: INFO: stdout: "iptables"
Nov 13 03:48:30.107: INFO: proxyMode: iptables
Nov 13 03:48:30.114: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Nov 13 03:48:30.116: INFO: Pod kube-proxy-mode-detector no longer exists
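
    The probe above is how the test discovers which proxier kube-proxy is running: a short-lived hostNetwork pod curls kube-proxy's metrics endpoint at localhost:10249/proxyMode, which returns the active mode ("iptables" in this run). The detector pod is deleted right afterwards, so the sketch below treats the pod name as a placeholder for any hostNetwork pod scheduled on the node of interest; only the port and path are taken from the log.

    // proxymode.go: repeat the kube-proxy mode probe seen above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // kube-proxy-mode-detector is gone by now; substitute any hostNetwork pod
        // running on the node whose proxy mode you want to inspect.
        out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
            "--namespace=services-7133", "exec", "kube-proxy-mode-detector", "--",
            "/bin/sh", "-c", "curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode").Output()
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        // The metrics server answers with the proxier name; this run printed "iptables".
        fmt.Println("proxyMode:", strings.TrimSpace(string(out)))
    }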
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-7133
Nov 13 03:48:30.121: INFO: sourceip-test cluster ip: 10.233.27.120
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Nov 13 03:48:30.138: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:32.141: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:34.143: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:36.142: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:38.142: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-7133 to expose endpoints map[echo-sourceip:[8080]]
Nov 13 03:48:38.149: INFO: successfully validated that service sourceip-test in namespace services-7133 exposes endpoints map[echo-sourceip:[8080]]
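
    The validation above only succeeds once the Endpoints object for sourceip-test lists the echo-sourceip pod on port 8080. The same state can be inspected from outside the test with a jsonpath query; the sketch below assumes the services-7133 namespace still exists and simply prints the backend address and port recorded in the Endpoints object.

    // endpoints.go: show the backends the sourceip-test service currently exposes.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prints the pod IP(s) and port(s) behind the service, e.g. "10.244.4.x 8080".
        out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
            "--namespace=services-7133", "get", "endpoints", "sourceip-test",
            "-o", "jsonpath={.subsets[*].addresses[*].ip} {.subsets[*].ports[*].port}").Output()
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println(string(out))
    }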
STEP: Creating pause pod deployment
Nov 13 03:48:38.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Nov 13 03:48:40.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372118, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372118, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372118, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372118, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5cf96f7945\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 13 03:48:42.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372118, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372118, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372121, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63772372118, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5cf96f7945\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 13 03:48:44.165: INFO: Waiting up to 2m0s to get response from 10.233.27.120:8080
Nov 13 03:48:44.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7133 exec pause-pod-5cf96f7945-6xcl5 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.27.120:8080/clientip'
Nov 13 03:48:44.408: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.27.120:8080/clientip\n"
Nov 13 03:48:44.408: INFO: stdout: "10.244.4.141:53548"
STEP: Verifying the preserved source ip
Nov 13 03:48:44.408: INFO: Waiting up to 2m0s to get response from 10.233.27.120:8080
Nov 13 03:48:44.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-7133 exec pause-pod-5cf96f7945-mzpkb -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.27.120:8080/clientip'
Nov 13 03:48:44.846: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.27.120:8080/clientip\n"
Nov 13 03:48:44.846: INFO: stdout: "10.244.3.27:35020"
STEP: Verifying the preserved source ip
Nov 13 03:48:44.846: INFO: Deleting deployment
Nov 13 03:48:44.850: INFO: Cleaning up the echo server pod
Nov 13 03:48:44.856: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:44.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7133" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:19.247 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":7,"skipped":1079,"failed":0}
Nov 13 03:48:44.875: INFO: Running AfterSuite actions on all nodes
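The pass above hinges on the echo server's /clientip handler, which returns the remote address of the incoming connection as "host:port"; the test then checks that the host part equals the calling pause pod's own IP, which only holds when kube-proxy does not SNAT traffic sent to the cluster IP. A minimal sketch of that comparison; sourcePreserved is a hypothetical helper, and the pod IP below is simply the echoed address from the log, which the passing test implies matches the pause pod:

package main

import (
	"fmt"
	"net"
)

// sourcePreserved reports whether the "host:port" body returned by /clientip
// names the client pod's own IP.
func sourcePreserved(clientipBody, podIP string) (bool, error) {
	host, _, err := net.SplitHostPort(clientipBody)
	if err != nil {
		return false, err
	}
	return host == podIP, nil
}

func main() {
	ok, err := sourcePreserved("10.244.4.141:53548", "10.244.4.141")
	fmt.Println(ok, err) // true <nil>
}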


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:22.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-5659
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:48:22.810: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:48:22.840: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:24.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:26.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:28.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:30.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:32.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:34.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:36.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:38.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:40.844: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:42.846: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:48:42.851: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:48:46.873: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:48:46.873: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:48:46.882: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:46.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5659" for this suite.


S [SKIPPING] [24.335 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Nov 13 03:48:46.894: INFO: Running AfterSuite actions on all nodes
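The netserver wait loop above (and the identical one in the next spec) is the standard pattern of polling a pod until its phase is Running and its Ready condition is True. A minimal client-go sketch of that check, assuming a clientset has already been built elsewhere; waitForPodReady is a hypothetical helper, not the framework's own function:

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls every 2s (the interval seen in the log above) until
// the pod is Running with Ready=True, or the timeout expires.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && pod.Status.Phase == corev1.PodRunning {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Running and Ready within %v", ns, name, timeout)
}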


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:47:34.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Nov 13 03:47:34.143: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:36.148: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:38.146: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:40.148: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:42.149: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Nov 13 03:47:42.169: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:44.174: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:46.173: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:47:48.171: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Nov 13 03:48:48.229: INFO: boom-server pod logs: 2021/11/13 03:47:40 external ip: 10.244.4.116
2021/11/13 03:47:40 listen on 0.0.0.0:9000
2021/11/13 03:47:40 probing 10.244.4.116
2021/11/13 03:47:47 tcp packet: &{SrcPort:43177 DestPort:9000 Seq:3272825330 Ack:0 Flags:40962 WindowSize:29200 Checksum:17234 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:47:47 tcp packet: &{SrcPort:43177 DestPort:9000 Seq:3272825331 Ack:1936435840 Flags:32784 WindowSize:229 Checksum:14610 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:47 connection established
2021/11/13 03:47:47 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 168 169 115 106 35 224 195 19 89 243 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:47:47 checksumer: &{sum:501210 oddByte:33 length:39}
2021/11/13 03:47:47 ret:  501243
2021/11/13 03:47:47 ret:  42498
2021/11/13 03:47:47 ret:  42498
2021/11/13 03:47:47 boom packet injected
2021/11/13 03:47:47 tcp packet: &{SrcPort:43177 DestPort:9000 Seq:3272825331 Ack:1936435840 Flags:32785 WindowSize:229 Checksum:14609 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:49 tcp packet: &{SrcPort:42861 DestPort:9000 Seq:3105684496 Ack:0 Flags:40962 WindowSize:29200 Checksum:42134 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:47:49 tcp packet: &{SrcPort:42861 DestPort:9000 Seq:3105684497 Ack:949291855 Flags:32784 WindowSize:229 Checksum:27790 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:49 connection established
2021/11/13 03:47:49 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 167 109 56 147 132 175 185 28 252 17 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:47:49 checksumer: &{sum:428440 oddByte:33 length:39}
2021/11/13 03:47:49 ret:  428473
2021/11/13 03:47:49 ret:  35263
2021/11/13 03:47:49 ret:  35263
2021/11/13 03:47:49 boom packet injected
2021/11/13 03:47:49 tcp packet: &{SrcPort:42861 DestPort:9000 Seq:3105684497 Ack:949291855 Flags:32785 WindowSize:229 Checksum:27789 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:51 tcp packet: &{SrcPort:34188 DestPort:9000 Seq:3966302516 Ack:0 Flags:40962 WindowSize:29200 Checksum:36406 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:47:51 tcp packet: &{SrcPort:34188 DestPort:9000 Seq:3966302517 Ack:1828170398 Flags:32784 WindowSize:229 Checksum:31403 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:51 connection established
2021/11/13 03:47:51 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 133 140 108 246 35 254 236 104 249 53 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:47:51 checksumer: &{sum:510585 oddByte:33 length:39}
2021/11/13 03:47:51 ret:  510618
2021/11/13 03:47:51 ret:  51873
2021/11/13 03:47:51 ret:  51873
2021/11/13 03:47:51 boom packet injected
2021/11/13 03:47:51 tcp packet: &{SrcPort:34188 DestPort:9000 Seq:3966302517 Ack:1828170398 Flags:32785 WindowSize:229 Checksum:31402 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:53 tcp packet: &{SrcPort:35515 DestPort:9000 Seq:4121300331 Ack:0 Flags:40962 WindowSize:29200 Checksum:25539 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:47:53 tcp packet: &{SrcPort:35515 DestPort:9000 Seq:4121300332 Ack:2412071396 Flags:32784 WindowSize:229 Checksum:34388 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:53 connection established
2021/11/13 03:47:53 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 138 187 143 195 195 68 245 166 13 108 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:47:53 checksumer: &{sum:491870 oddByte:33 length:39}
2021/11/13 03:47:53 ret:  491903
2021/11/13 03:47:53 ret:  33158
2021/11/13 03:47:53 ret:  33158
2021/11/13 03:47:53 boom packet injected
2021/11/13 03:47:53 tcp packet: &{SrcPort:35515 DestPort:9000 Seq:4121300332 Ack:2412071396 Flags:32785 WindowSize:229 Checksum:34387 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:55 tcp packet: &{SrcPort:37701 DestPort:9000 Seq:3534470007 Ack:0 Flags:40962 WindowSize:29200 Checksum:51286 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:47:55 tcp packet: &{SrcPort:37701 DestPort:9000 Seq:3534470008 Ack:442040882 Flags:32784 WindowSize:229 Checksum:41013 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:55 connection established
2021/11/13 03:47:55 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 147 69 26 87 123 146 210 171 187 120 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:47:55 checksumer: &{sum:458293 oddByte:33 length:39}
2021/11/13 03:47:55 ret:  458326
2021/11/13 03:47:55 ret:  65116
2021/11/13 03:47:55 ret:  65116
2021/11/13 03:47:55 boom packet injected
2021/11/13 03:47:55 tcp packet: &{SrcPort:37701 DestPort:9000 Seq:3534470008 Ack:442040882 Flags:32785 WindowSize:229 Checksum:41012 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:57 tcp packet: &{SrcPort:43177 DestPort:9000 Seq:3272825332 Ack:1936435841 Flags:32784 WindowSize:229 Checksum:60141 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:57 tcp packet: &{SrcPort:42355 DestPort:9000 Seq:1952939587 Ack:0 Flags:40962 WindowSize:29200 Checksum:18897 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:47:57 tcp packet: &{SrcPort:42355 DestPort:9000 Seq:1952939588 Ack:1372441745 Flags:32784 WindowSize:229 Checksum:6154 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:57 connection established
2021/11/13 03:47:57 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 165 115 81 204 69 241 116 103 126 68 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:47:57 checksumer: &{sum:493485 oddByte:33 length:39}
2021/11/13 03:47:57 ret:  493518
2021/11/13 03:47:57 ret:  34773
2021/11/13 03:47:57 ret:  34773
2021/11/13 03:47:57 boom packet injected
2021/11/13 03:47:57 tcp packet: &{SrcPort:42355 DestPort:9000 Seq:1952939588 Ack:1372441745 Flags:32785 WindowSize:229 Checksum:6153 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:59 tcp packet: &{SrcPort:42861 DestPort:9000 Seq:3105684498 Ack:949291856 Flags:32784 WindowSize:229 Checksum:7788 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:59 tcp packet: &{SrcPort:38759 DestPort:9000 Seq:3680271903 Ack:0 Flags:40962 WindowSize:29200 Checksum:59705 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:47:59 tcp packet: &{SrcPort:38759 DestPort:9000 Seq:3680271904 Ack:1976084162 Flags:32784 WindowSize:229 Checksum:45430 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:47:59 connection established
2021/11/13 03:47:59 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 151 103 117 199 32 34 219 92 126 32 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:47:59 checksumer: &{sum:424197 oddByte:33 length:39}
2021/11/13 03:47:59 ret:  424230
2021/11/13 03:47:59 ret:  31020
2021/11/13 03:47:59 ret:  31020
2021/11/13 03:47:59 boom packet injected
2021/11/13 03:47:59 tcp packet: &{SrcPort:38759 DestPort:9000 Seq:3680271904 Ack:1976084162 Flags:32785 WindowSize:229 Checksum:45429 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:01 tcp packet: &{SrcPort:34188 DestPort:9000 Seq:3966302518 Ack:1828170399 Flags:32784 WindowSize:229 Checksum:11401 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:01 tcp packet: &{SrcPort:44073 DestPort:9000 Seq:4042148536 Ack:0 Flags:40962 WindowSize:29200 Checksum:60028 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:01 tcp packet: &{SrcPort:44073 DestPort:9000 Seq:4042148537 Ack:3835476758 Flags:32784 WindowSize:229 Checksum:10177 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:01 connection established
2021/11/13 03:48:01 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 172 41 228 155 52 118 240 238 74 185 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:01 checksumer: &{sum:495230 oddByte:33 length:39}
2021/11/13 03:48:01 ret:  495263
2021/11/13 03:48:01 ret:  36518
2021/11/13 03:48:01 ret:  36518
2021/11/13 03:48:01 boom packet injected
2021/11/13 03:48:01 tcp packet: &{SrcPort:44073 DestPort:9000 Seq:4042148537 Ack:3835476758 Flags:32785 WindowSize:229 Checksum:10176 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:03 tcp packet: &{SrcPort:35515 DestPort:9000 Seq:4121300333 Ack:2412071397 Flags:32784 WindowSize:229 Checksum:14384 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:03 tcp packet: &{SrcPort:46458 DestPort:9000 Seq:522470745 Ack:0 Flags:40962 WindowSize:29200 Checksum:45186 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:03 tcp packet: &{SrcPort:46458 DestPort:9000 Seq:522470746 Ack:2663356012 Flags:32784 WindowSize:229 Checksum:20603 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:03 connection established
2021/11/13 03:48:03 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 181 122 158 190 15 204 31 36 69 90 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:03 checksumer: &{sum:470598 oddByte:33 length:39}
2021/11/13 03:48:03 ret:  470631
2021/11/13 03:48:03 ret:  11886
2021/11/13 03:48:03 ret:  11886
2021/11/13 03:48:03 boom packet injected
2021/11/13 03:48:03 tcp packet: &{SrcPort:46458 DestPort:9000 Seq:522470746 Ack:2663356012 Flags:32785 WindowSize:229 Checksum:20602 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:05 tcp packet: &{SrcPort:37701 DestPort:9000 Seq:3534470009 Ack:442040883 Flags:32784 WindowSize:229 Checksum:21009 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:05 tcp packet: &{SrcPort:46523 DestPort:9000 Seq:4125902519 Ack:0 Flags:40962 WindowSize:29200 Checksum:53324 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:05 tcp packet: &{SrcPort:46523 DestPort:9000 Seq:4125902520 Ack:241837749 Flags:32784 WindowSize:229 Checksum:26756 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:05 connection established
2021/11/13 03:48:05 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 181 187 14 104 160 21 245 236 70 184 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:05 checksumer: &{sum:493854 oddByte:33 length:39}
2021/11/13 03:48:05 ret:  493887
2021/11/13 03:48:05 ret:  35142
2021/11/13 03:48:05 ret:  35142
2021/11/13 03:48:05 boom packet injected
2021/11/13 03:48:05 tcp packet: &{SrcPort:46523 DestPort:9000 Seq:4125902520 Ack:241837749 Flags:32785 WindowSize:229 Checksum:26755 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:07 tcp packet: &{SrcPort:42355 DestPort:9000 Seq:1952939589 Ack:1372441746 Flags:32784 WindowSize:229 Checksum:51687 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:07 tcp packet: &{SrcPort:45581 DestPort:9000 Seq:3898329386 Ack:0 Flags:40962 WindowSize:29200 Checksum:22344 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:07 tcp packet: &{SrcPort:45581 DestPort:9000 Seq:3898329387 Ack:883833300 Flags:32784 WindowSize:229 Checksum:44620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:07 connection established
2021/11/13 03:48:07 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 178 13 52 172 179 52 232 91 201 43 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:07 checksumer: &{sum:401610 oddByte:33 length:39}
2021/11/13 03:48:07 ret:  401643
2021/11/13 03:48:07 ret:  8433
2021/11/13 03:48:07 ret:  8433
2021/11/13 03:48:07 boom packet injected
2021/11/13 03:48:07 tcp packet: &{SrcPort:45581 DestPort:9000 Seq:3898329387 Ack:883833300 Flags:32785 WindowSize:229 Checksum:44619 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:09 tcp packet: &{SrcPort:38759 DestPort:9000 Seq:3680271905 Ack:1976084163 Flags:32784 WindowSize:229 Checksum:25428 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:09 tcp packet: &{SrcPort:41003 DestPort:9000 Seq:930781119 Ack:0 Flags:40962 WindowSize:29200 Checksum:17319 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:09 tcp packet: &{SrcPort:41003 DestPort:9000 Seq:930781120 Ack:4254652605 Flags:32784 WindowSize:229 Checksum:11015 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:09 connection established
2021/11/13 03:48:09 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 160 43 253 151 82 29 55 122 151 192 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:09 checksumer: &{sum:443965 oddByte:33 length:39}
2021/11/13 03:48:09 ret:  443998
2021/11/13 03:48:09 ret:  50788
2021/11/13 03:48:09 ret:  50788
2021/11/13 03:48:09 boom packet injected
2021/11/13 03:48:09 tcp packet: &{SrcPort:41003 DestPort:9000 Seq:930781120 Ack:4254652605 Flags:32785 WindowSize:229 Checksum:11014 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:11 tcp packet: &{SrcPort:44073 DestPort:9000 Seq:4042148538 Ack:3835476759 Flags:32784 WindowSize:229 Checksum:55671 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:11 tcp packet: &{SrcPort:39434 DestPort:9000 Seq:1241650052 Ack:0 Flags:40962 WindowSize:29200 Checksum:45957 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:11 tcp packet: &{SrcPort:39434 DestPort:9000 Seq:1241650053 Ack:3713782275 Flags:32784 WindowSize:229 Checksum:47590 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:11 connection established
2021/11/13 03:48:11 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 154 10 221 90 75 99 74 2 19 133 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:11 checksumer: &{sum:391839 oddByte:33 length:39}
2021/11/13 03:48:11 ret:  391872
2021/11/13 03:48:11 ret:  64197
2021/11/13 03:48:11 ret:  64197
2021/11/13 03:48:11 boom packet injected
2021/11/13 03:48:11 tcp packet: &{SrcPort:39434 DestPort:9000 Seq:1241650053 Ack:3713782275 Flags:32785 WindowSize:229 Checksum:47589 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:13 tcp packet: &{SrcPort:43155 DestPort:9000 Seq:231528173 Ack:0 Flags:40962 WindowSize:29200 Checksum:5662 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:13 tcp packet: &{SrcPort:43155 DestPort:9000 Seq:231528174 Ack:3950490880 Flags:32784 WindowSize:229 Checksum:9147 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:13 connection established
2021/11/13 03:48:13 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 168 147 235 118 46 96 13 204 214 238 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:13 checksumer: &{sum:512036 oddByte:33 length:39}
2021/11/13 03:48:13 ret:  512069
2021/11/13 03:48:13 ret:  53324
2021/11/13 03:48:13 ret:  53324
2021/11/13 03:48:13 boom packet injected
2021/11/13 03:48:13 tcp packet: &{SrcPort:43155 DestPort:9000 Seq:231528174 Ack:3950490880 Flags:32785 WindowSize:229 Checksum:9146 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:13 tcp packet: &{SrcPort:46458 DestPort:9000 Seq:522470747 Ack:2663356013 Flags:32784 WindowSize:229 Checksum:601 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:15 tcp packet: &{SrcPort:46523 DestPort:9000 Seq:4125902521 Ack:241837750 Flags:32784 WindowSize:229 Checksum:6752 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:15 tcp packet: &{SrcPort:35067 DestPort:9000 Seq:4230720887 Ack:0 Flags:40962 WindowSize:29200 Checksum:26876 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:15 tcp packet: &{SrcPort:35067 DestPort:9000 Seq:4230720888 Ack:726011054 Flags:32784 WindowSize:229 Checksum:55116 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:15 connection established
2021/11/13 03:48:15 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 136 251 43 68 134 14 252 43 173 120 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:15 checksumer: &{sum:433506 oddByte:33 length:39}
2021/11/13 03:48:15 ret:  433539
2021/11/13 03:48:15 ret:  40329
2021/11/13 03:48:15 ret:  40329
2021/11/13 03:48:15 boom packet injected
2021/11/13 03:48:15 tcp packet: &{SrcPort:35067 DestPort:9000 Seq:4230720888 Ack:726011054 Flags:32785 WindowSize:229 Checksum:55115 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:17 tcp packet: &{SrcPort:45581 DestPort:9000 Seq:3898329388 Ack:883833301 Flags:32784 WindowSize:229 Checksum:24618 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:17 tcp packet: &{SrcPort:36362 DestPort:9000 Seq:1311858123 Ack:0 Flags:40962 WindowSize:29200 Checksum:22979 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:17 tcp packet: &{SrcPort:36362 DestPort:9000 Seq:1311858124 Ack:931877191 Flags:32784 WindowSize:229 Checksum:28516 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:17 connection established
2021/11/13 03:48:17 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 142 10 55 137 202 167 78 49 93 204 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:17 checksumer: &{sum:451514 oddByte:33 length:39}
2021/11/13 03:48:17 ret:  451547
2021/11/13 03:48:17 ret:  58337
2021/11/13 03:48:17 ret:  58337
2021/11/13 03:48:17 boom packet injected
2021/11/13 03:48:17 tcp packet: &{SrcPort:36362 DestPort:9000 Seq:1311858124 Ack:931877191 Flags:32785 WindowSize:229 Checksum:28515 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:19 tcp packet: &{SrcPort:41003 DestPort:9000 Seq:930781121 Ack:4254652606 Flags:32784 WindowSize:229 Checksum:56547 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:20 tcp packet: &{SrcPort:39500 DestPort:9000 Seq:2966068162 Ack:0 Flags:40962 WindowSize:29200 Checksum:42092 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:20 tcp packet: &{SrcPort:39500 DestPort:9000 Seq:2966068163 Ack:1532451380 Flags:32784 WindowSize:229 Checksum:34001 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:20 connection established
2021/11/13 03:48:20 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 154 76 91 85 211 148 176 202 155 195 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:20 checksumer: &{sum:487315 oddByte:33 length:39}
2021/11/13 03:48:20 ret:  487348
2021/11/13 03:48:20 ret:  28603
2021/11/13 03:48:20 ret:  28603
2021/11/13 03:48:20 boom packet injected
2021/11/13 03:48:20 tcp packet: &{SrcPort:39500 DestPort:9000 Seq:2966068163 Ack:1532451380 Flags:32785 WindowSize:229 Checksum:34000 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:21 tcp packet: &{SrcPort:39434 DestPort:9000 Seq:1241650054 Ack:3713782276 Flags:32784 WindowSize:229 Checksum:27586 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:21 tcp packet: &{SrcPort:44486 DestPort:9000 Seq:1517183470 Ack:0 Flags:40962 WindowSize:29200 Checksum:6536 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:21 tcp packet: &{SrcPort:44486 DestPort:9000 Seq:1517183471 Ack:3334518943 Flags:32784 WindowSize:229 Checksum:10365 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:21 connection established
2021/11/13 03:48:21 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 173 198 198 191 49 255 90 110 97 239 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:21 checksumer: &{sum:560607 oddByte:33 length:39}
2021/11/13 03:48:21 ret:  560640
2021/11/13 03:48:21 ret:  36360
2021/11/13 03:48:21 ret:  36360
2021/11/13 03:48:21 boom packet injected
2021/11/13 03:48:21 tcp packet: &{SrcPort:44486 DestPort:9000 Seq:1517183471 Ack:3334518943 Flags:32785 WindowSize:229 Checksum:10364 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:23 tcp packet: &{SrcPort:43155 DestPort:9000 Seq:231528175 Ack:3950490881 Flags:32784 WindowSize:229 Checksum:54680 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:23 tcp packet: &{SrcPort:39237 DestPort:9000 Seq:3867744 Ack:0 Flags:40962 WindowSize:29200 Checksum:56825 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:23 tcp packet: &{SrcPort:39237 DestPort:9000 Seq:3867745 Ack:490590599 Flags:32784 WindowSize:229 Checksum:30137 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:23 connection established
2021/11/13 03:48:23 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 153 69 29 60 74 231 0 59 4 97 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:23 checksumer: &{sum:438148 oddByte:33 length:39}
2021/11/13 03:48:23 ret:  438181
2021/11/13 03:48:23 ret:  44971
2021/11/13 03:48:23 ret:  44971
2021/11/13 03:48:23 boom packet injected
2021/11/13 03:48:23 tcp packet: &{SrcPort:39237 DestPort:9000 Seq:3867745 Ack:490590599 Flags:32785 WindowSize:229 Checksum:30136 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:25 tcp packet: &{SrcPort:35067 DestPort:9000 Seq:4230720889 Ack:726011055 Flags:32784 WindowSize:229 Checksum:35113 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:25 tcp packet: &{SrcPort:34024 DestPort:9000 Seq:675195366 Ack:0 Flags:40962 WindowSize:29200 Checksum:7421 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:25 tcp packet: &{SrcPort:34024 DestPort:9000 Seq:675195367 Ack:3840970818 Flags:32784 WindowSize:229 Checksum:9854 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:25 connection established
2021/11/13 03:48:25 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 132 232 228 239 9 162 40 62 169 231 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:25 checksumer: &{sum:543426 oddByte:33 length:39}
2021/11/13 03:48:25 ret:  543459
2021/11/13 03:48:25 ret:  19179
2021/11/13 03:48:25 ret:  19179
2021/11/13 03:48:25 boom packet injected
2021/11/13 03:48:25 tcp packet: &{SrcPort:34024 DestPort:9000 Seq:675195367 Ack:3840970818 Flags:32785 WindowSize:229 Checksum:9853 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:27 tcp packet: &{SrcPort:36362 DestPort:9000 Seq:1311858125 Ack:931877192 Flags:32784 WindowSize:229 Checksum:8513 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:27 tcp packet: &{SrcPort:34840 DestPort:9000 Seq:788105643 Ack:0 Flags:40962 WindowSize:29200 Checksum:11132 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:27 tcp packet: &{SrcPort:34840 DestPort:9000 Seq:788105644 Ack:3147272569 Flags:32784 WindowSize:229 Checksum:21838 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:27 connection established
2021/11/13 03:48:27 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 136 24 187 150 10 217 46 249 137 172 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:27 checksumer: &{sum:514180 oddByte:33 length:39}
2021/11/13 03:48:27 ret:  514213
2021/11/13 03:48:27 ret:  55468
2021/11/13 03:48:27 ret:  55468
2021/11/13 03:48:27 boom packet injected
2021/11/13 03:48:27 tcp packet: &{SrcPort:34840 DestPort:9000 Seq:788105644 Ack:3147272569 Flags:32785 WindowSize:229 Checksum:21837 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:29 tcp packet: &{SrcPort:35192 DestPort:9000 Seq:3974049887 Ack:0 Flags:40962 WindowSize:29200 Checksum:48556 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:29 tcp packet: &{SrcPort:35192 DestPort:9000 Seq:3974049888 Ack:715649660 Flags:32784 WindowSize:229 Checksum:3989 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:29 connection established
2021/11/13 03:48:29 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 137 120 42 166 107 220 236 223 48 96 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:29 checksumer: &{sum:517562 oddByte:33 length:39}
2021/11/13 03:48:29 ret:  517595
2021/11/13 03:48:29 ret:  58850
2021/11/13 03:48:29 ret:  58850
2021/11/13 03:48:29 boom packet injected
2021/11/13 03:48:30 tcp packet: &{SrcPort:35192 DestPort:9000 Seq:3974049888 Ack:715649660 Flags:32785 WindowSize:229 Checksum:3984 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:30 tcp packet: &{SrcPort:39500 DestPort:9000 Seq:2966068164 Ack:1532451381 Flags:32784 WindowSize:229 Checksum:13997 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:31 tcp packet: &{SrcPort:44486 DestPort:9000 Seq:1517183472 Ack:3334518944 Flags:32784 WindowSize:229 Checksum:55896 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:31 tcp packet: &{SrcPort:33951 DestPort:9000 Seq:2350609528 Ack:0 Flags:40962 WindowSize:29200 Checksum:55140 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:31 tcp packet: &{SrcPort:33951 DestPort:9000 Seq:2350609529 Ack:3121013029 Flags:32784 WindowSize:229 Checksum:41850 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:31 connection established
2021/11/13 03:48:31 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 132 159 186 5 90 133 140 27 116 121 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:31 checksumer: &{sum:420376 oddByte:33 length:39}
2021/11/13 03:48:31 ret:  420409
2021/11/13 03:48:31 ret:  27199
2021/11/13 03:48:31 ret:  27199
2021/11/13 03:48:31 boom packet injected
2021/11/13 03:48:31 tcp packet: &{SrcPort:33951 DestPort:9000 Seq:2350609529 Ack:3121013029 Flags:32785 WindowSize:229 Checksum:41849 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:33 tcp packet: &{SrcPort:39237 DestPort:9000 Seq:3867746 Ack:490590600 Flags:32784 WindowSize:229 Checksum:10135 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:33 tcp packet: &{SrcPort:32973 DestPort:9000 Seq:2899641831 Ack:0 Flags:40962 WindowSize:29200 Checksum:8510 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:33 tcp packet: &{SrcPort:32973 DestPort:9000 Seq:2899641832 Ack:2269796055 Flags:32784 WindowSize:229 Checksum:41614 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:33 connection established
2021/11/13 03:48:33 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 128 205 135 72 208 55 172 213 5 232 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:33 checksumer: &{sum:505352 oddByte:33 length:39}
2021/11/13 03:48:33 ret:  505385
2021/11/13 03:48:33 ret:  46640
2021/11/13 03:48:33 ret:  46640
2021/11/13 03:48:33 boom packet injected
2021/11/13 03:48:33 tcp packet: &{SrcPort:32973 DestPort:9000 Seq:2899641832 Ack:2269796055 Flags:32785 WindowSize:229 Checksum:41613 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:35 tcp packet: &{SrcPort:34024 DestPort:9000 Seq:675195368 Ack:3840970819 Flags:32784 WindowSize:229 Checksum:55385 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:35 tcp packet: &{SrcPort:41961 DestPort:9000 Seq:3368401080 Ack:0 Flags:40962 WindowSize:29200 Checksum:10127 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:35 tcp packet: &{SrcPort:41961 DestPort:9000 Seq:3368401081 Ack:300080466 Flags:32784 WindowSize:229 Checksum:36859 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:35 connection established
2021/11/13 03:48:35 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 163 233 17 225 86 178 200 197 184 185 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:35 checksumer: &{sum:567050 oddByte:33 length:39}
2021/11/13 03:48:35 ret:  567083
2021/11/13 03:48:35 ret:  42803
2021/11/13 03:48:35 ret:  42803
2021/11/13 03:48:35 boom packet injected
2021/11/13 03:48:35 tcp packet: &{SrcPort:41961 DestPort:9000 Seq:3368401081 Ack:300080466 Flags:32785 WindowSize:229 Checksum:36858 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:37 tcp packet: &{SrcPort:34840 DestPort:9000 Seq:788105645 Ack:3147272570 Flags:32784 WindowSize:229 Checksum:1836 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:37 tcp packet: &{SrcPort:41447 DestPort:9000 Seq:2168929533 Ack:0 Flags:40962 WindowSize:29200 Checksum:58617 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:37 tcp packet: &{SrcPort:41447 DestPort:9000 Seq:2168929534 Ack:2267795457 Flags:32784 WindowSize:229 Checksum:56732 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:37 connection established
2021/11/13 03:48:37 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 161 231 135 42 73 97 129 71 60 254 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:37 checksumer: &{sum:484270 oddByte:33 length:39}
2021/11/13 03:48:37 ret:  484303
2021/11/13 03:48:37 ret:  25558
2021/11/13 03:48:37 ret:  25558
2021/11/13 03:48:37 boom packet injected
2021/11/13 03:48:37 tcp packet: &{SrcPort:41447 DestPort:9000 Seq:2168929534 Ack:2267795457 Flags:32785 WindowSize:229 Checksum:56731 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:39 tcp packet: &{SrcPort:46242 DestPort:9000 Seq:726964998 Ack:0 Flags:40962 WindowSize:29200 Checksum:49752 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:39 tcp packet: &{SrcPort:46242 DestPort:9000 Seq:726964999 Ack:3576921236 Flags:32784 WindowSize:229 Checksum:47246 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:39 connection established
2021/11/13 03:48:39 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 180 162 213 49 245 244 43 84 155 7 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:39 checksumer: &{sum:446404 oddByte:33 length:39}
2021/11/13 03:48:39 ret:  446437
2021/11/13 03:48:39 ret:  53227
2021/11/13 03:48:39 ret:  53227
2021/11/13 03:48:39 boom packet injected
2021/11/13 03:48:39 tcp packet: &{SrcPort:46242 DestPort:9000 Seq:726964999 Ack:3576921236 Flags:32785 WindowSize:229 Checksum:47245 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:39 tcp packet: &{SrcPort:35192 DestPort:9000 Seq:3974049889 Ack:715649661 Flags:32784 WindowSize:229 Checksum:49522 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:41 tcp packet: &{SrcPort:33951 DestPort:9000 Seq:2350609530 Ack:3121013030 Flags:32784 WindowSize:229 Checksum:21846 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:41 tcp packet: &{SrcPort:42610 DestPort:9000 Seq:2235209213 Ack:0 Flags:40962 WindowSize:29200 Checksum:29658 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:41 tcp packet: &{SrcPort:42610 DestPort:9000 Seq:2235209214 Ack:4280622754 Flags:32784 WindowSize:229 Checksum:38465 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:41 connection established
2021/11/13 03:48:41 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 166 114 255 35 152 2 133 58 149 254 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:41 checksumer: &{sum:425175 oddByte:33 length:39}
2021/11/13 03:48:41 ret:  425208
2021/11/13 03:48:41 ret:  31998
2021/11/13 03:48:41 ret:  31998
2021/11/13 03:48:41 boom packet injected
2021/11/13 03:48:41 tcp packet: &{SrcPort:42610 DestPort:9000 Seq:2235209214 Ack:4280622754 Flags:32785 WindowSize:229 Checksum:38464 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:43 tcp packet: &{SrcPort:32973 DestPort:9000 Seq:2899641833 Ack:2269796056 Flags:32784 WindowSize:229 Checksum:21610 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:43 tcp packet: &{SrcPort:42458 DestPort:9000 Seq:756408701 Ack:0 Flags:40962 WindowSize:29200 Checksum:31047 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:43 tcp packet: &{SrcPort:42458 DestPort:9000 Seq:756408702 Ack:219326175 Flags:32784 WindowSize:229 Checksum:64948 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:43 connection established
2021/11/13 03:48:43 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 165 218 13 17 32 63 45 21 225 126 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:43 checksumer: &{sum:420192 oddByte:33 length:39}
2021/11/13 03:48:43 ret:  420225
2021/11/13 03:48:43 ret:  27015
2021/11/13 03:48:43 ret:  27015
2021/11/13 03:48:43 boom packet injected
2021/11/13 03:48:43 tcp packet: &{SrcPort:42458 DestPort:9000 Seq:756408702 Ack:219326175 Flags:32785 WindowSize:229 Checksum:64947 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:45 tcp packet: &{SrcPort:41961 DestPort:9000 Seq:3368401082 Ack:300080467 Flags:32784 WindowSize:229 Checksum:16857 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:45 tcp packet: &{SrcPort:42392 DestPort:9000 Seq:1801873684 Ack:0 Flags:40962 WindowSize:29200 Checksum:43984 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:45 tcp packet: &{SrcPort:42392 DestPort:9000 Seq:1801873685 Ack:1045784594 Flags:32784 WindowSize:229 Checksum:13815 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:45 connection established
2021/11/13 03:48:45 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 165 152 62 83 225 114 107 102 105 21 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:45 checksumer: &{sum:427288 oddByte:33 length:39}
2021/11/13 03:48:45 ret:  427321
2021/11/13 03:48:45 ret:  34111
2021/11/13 03:48:45 ret:  34111
2021/11/13 03:48:45 boom packet injected
2021/11/13 03:48:45 tcp packet: &{SrcPort:42392 DestPort:9000 Seq:1801873685 Ack:1045784594 Flags:32785 WindowSize:229 Checksum:13814 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:47 tcp packet: &{SrcPort:41447 DestPort:9000 Seq:2168929535 Ack:2267795458 Flags:32784 WindowSize:229 Checksum:36730 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:47 tcp packet: &{SrcPort:46011 DestPort:9000 Seq:3155410034 Ack:0 Flags:40962 WindowSize:29200 Checksum:61905 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.10
2021/11/13 03:48:47 tcp packet: &{SrcPort:46011 DestPort:9000 Seq:3155410035 Ack:3415698402 Flags:32784 WindowSize:229 Checksum:58133 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.10
2021/11/13 03:48:47 connection established
2021/11/13 03:48:47 calling checksumTCP: 10.244.4.116 10.244.3.10 [35 40 179 187 203 149 229 66 188 19 188 115 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/11/13 03:48:47 checksumer: &{sum:443995 oddByte:33 length:39}
2021/11/13 03:48:47 ret:  444028
2021/11/13 03:48:47 ret:  50818
2021/11/13 03:48:47 ret:  50818
2021/11/13 03:48:47 boom packet injected
2021/11/13 03:48:47 tcp packet: &{SrcPort:46011 DestPort:9000 Seq:3155410035 Ack:3415698402 Flags:32785 WindowSize:229 Checksum:58132 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.10

Nov 13 03:48:48.229: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:48.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-4268" for this suite.


• [SLOW TEST:74.134 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":1,"skipped":315,"failed":0}
Nov 13 03:48:48.241: INFO: Running AfterSuite actions on all nodes
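The "checksumer" lines in the boom-server log above are the ordinary 16-bit one's-complement TCP checksum: byte pairs are summed into a 32-bit accumulator, the trailing odd byte is added (33 here, the last '!' of the 7-byte "boom!!!" payload, since the 39-byte span of pseudo-header, TCP header and payload is odd), and the carry is folded back until the value fits in 16 bits; that is why 501243 collapses to 42498 and a second fold changes nothing. A minimal sketch of the fold, reproducing those numbers; fold16 is a hypothetical name, not the server's own function:

package main

import "fmt"

// fold16 folds the carry bits of a 32-bit checksum accumulator back into the
// low 16 bits (the RFC 1071 end-around carry).
func fold16(sum uint32) uint32 {
	for sum > 0xffff {
		sum = (sum >> 16) + (sum & 0xffff)
	}
	return sum
}

func main() {
	sum := uint32(501210) + 33       // running sum plus the trailing odd byte
	fmt.Println(sum)                 // 501243
	fmt.Println(fold16(sum))         // 42498
	fmt.Println(fold16(fold16(sum))) // 42498 (second fold is a no-op)
}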


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:24.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-4051
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 13 03:48:24.417: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:48:24.449: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:26.454: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:28.453: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:30.453: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:32.454: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:34.453: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:36.454: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:38.452: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:40.453: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:42.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:44.454: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 13 03:48:46.454: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 13 03:48:46.459: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 13 03:48:50.480: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Nov 13 03:48:50.480: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 13 03:48:50.486: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:48:50.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4051" for this suite.


S [SKIPPING] [26.189 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Nov 13 03:48:50.499: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:47:05.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-2785
Nov 13 03:47:05.345: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-2785
I1113 03:47:05.359190      28 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-2785, replica count: 2
I1113 03:47:08.410644      28 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:47:11.412427      28 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:47:14.413331      28 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:47:17.415245      28 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 13 03:47:17.415: INFO: Creating new exec pod
Nov 13 03:47:26.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Nov 13 03:47:26.740: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-update-service 80\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Nov 13 03:47:26.740: INFO: stdout: "nodeport-update-service-vkrtp"
Nov 13 03:47:26.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.34.242 80'
Nov 13 03:47:26.981: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.34.242 80\nConnection to 10.233.34.242 80 port [tcp/http] succeeded!\n"
Nov 13 03:47:26.981: INFO: stdout: "nodeport-update-service-tqj5d"
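The two successful nc probes above, and the NodePort retry loop that follows, are plain TCP reachability checks with a 2-second connect timeout, repeated until the port answers or the framework gives up. A minimal Go sketch of the same probe using net.DialTimeout in place of nc; reachable is a hypothetical helper, and the address is the node IP / NodePort pair being retried below:

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable retries a TCP connect with a 2s dial timeout, roughly what
// `nc -v -t -w 2 <host> <port>` does in the loop below.
func reachable(addr string, attempts int) bool {
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return true
		}
		time.Sleep(1 * time.Second)
	}
	return false
}

func main() {
	fmt.Println(reachable("10.10.190.207:30043", 5))
}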
Nov 13 03:47:26.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:28.269: INFO: rc: 1
Nov 13 03:47:28.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:29.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:29.548: INFO: rc: 1
Nov 13 03:47:29.548: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:30.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:30.524: INFO: rc: 1
Nov 13 03:47:30.524: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:31.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:31.529: INFO: rc: 1
Nov 13 03:47:31.529: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:32.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:32.527: INFO: rc: 1
Nov 13 03:47:32.527: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:33.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:33.781: INFO: rc: 1
Nov 13 03:47:33.781: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:34.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:34.565: INFO: rc: 1
Nov 13 03:47:34.565: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:35.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:35.747: INFO: rc: 1
Nov 13 03:47:35.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:36.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:36.575: INFO: rc: 1
Nov 13 03:47:36.575: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:37.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:37.528: INFO: rc: 1
Nov 13 03:47:37.528: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:38.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:38.499: INFO: rc: 1
Nov 13 03:47:38.499: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:39.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:39.916: INFO: rc: 1
Nov 13 03:47:39.916: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:40.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:40.960: INFO: rc: 1
Nov 13 03:47:40.960: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:41.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:41.540: INFO: rc: 1
Nov 13 03:47:41.540: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:42.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:42.667: INFO: rc: 1
Nov 13 03:47:42.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:43.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:43.624: INFO: rc: 1
Nov 13 03:47:43.624: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:44.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:44.739: INFO: rc: 1
Nov 13 03:47:44.739: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:45.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:45.610: INFO: rc: 1
Nov 13 03:47:45.610: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:46.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:46.531: INFO: rc: 1
Nov 13 03:47:46.531: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:47.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:47.888: INFO: rc: 1
Nov 13 03:47:47.888: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:48.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:48.549: INFO: rc: 1
Nov 13 03:47:48.550: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:49.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:49.574: INFO: rc: 1
Nov 13 03:47:49.574: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:50.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:50.821: INFO: rc: 1
Nov 13 03:47:50.822: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:51.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:51.569: INFO: rc: 1
Nov 13 03:47:51.569: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:52.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:52.518: INFO: rc: 1
Nov 13 03:47:52.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:53.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:53.654: INFO: rc: 1
Nov 13 03:47:53.654: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:54.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:55.041: INFO: rc: 1
Nov 13 03:47:55.041: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:55.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:55.692: INFO: rc: 1
Nov 13 03:47:55.692: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:56.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:56.723: INFO: rc: 1
Nov 13 03:47:56.723: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:57.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:58.778: INFO: rc: 1
Nov 13 03:47:58.778: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:47:59.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:47:59.704: INFO: rc: 1
Nov 13 03:47:59.704: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:00.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:00.613: INFO: rc: 1
Nov 13 03:48:00.614: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:01.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:02.098: INFO: rc: 1
Nov 13 03:48:02.098: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:02.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:02.619: INFO: rc: 1
Nov 13 03:48:02.619: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:03.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:03.552: INFO: rc: 1
Nov 13 03:48:03.552: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:04.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:04.821: INFO: rc: 1
Nov 13 03:48:04.822: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:05.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:05.568: INFO: rc: 1
Nov 13 03:48:05.568: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:06.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:06.647: INFO: rc: 1
Nov 13 03:48:06.648: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:07.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:07.896: INFO: rc: 1
Nov 13 03:48:07.896: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:08.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:08.550: INFO: rc: 1
Nov 13 03:48:08.550: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:09.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:09.675: INFO: rc: 1
Nov 13 03:48:09.675: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:10.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:10.549: INFO: rc: 1
Nov 13 03:48:10.549: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:11.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:12.468: INFO: rc: 1
Nov 13 03:48:12.468: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:13.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:13.528: INFO: rc: 1
Nov 13 03:48:13.528: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:14.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:14.497: INFO: rc: 1
Nov 13 03:48:14.497: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:15.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:15.541: INFO: rc: 1
Nov 13 03:48:15.541: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:16.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:16.546: INFO: rc: 1
Nov 13 03:48:16.546: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:17.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:17.575: INFO: rc: 1
Nov 13 03:48:17.575: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:18.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:18.568: INFO: rc: 1
Nov 13 03:48:18.568: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:19.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:19.737: INFO: rc: 1
Nov 13 03:48:19.737: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:20.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:21.081: INFO: rc: 1
Nov 13 03:48:21.081: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:21.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:21.657: INFO: rc: 1
Nov 13 03:48:21.657: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:22.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:22.828: INFO: rc: 1
Nov 13 03:48:22.828: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:23.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:23.769: INFO: rc: 1
Nov 13 03:48:23.769: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:24.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:24.708: INFO: rc: 1
Nov 13 03:48:24.708: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:25.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:26.058: INFO: rc: 1
Nov 13 03:48:26.058: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:26.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:27.941: INFO: rc: 1
Nov 13 03:48:27.941: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:28.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:28.782: INFO: rc: 1
Nov 13 03:48:28.782: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:29.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:29.608: INFO: rc: 1
Nov 13 03:48:29.608: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:30.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:30.529: INFO: rc: 1
Nov 13 03:48:30.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:31.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:31.668: INFO: rc: 1
Nov 13 03:48:31.668: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:32.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:32.658: INFO: rc: 1
Nov 13 03:48:32.658: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:33.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:33.590: INFO: rc: 1
Nov 13 03:48:33.590: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30043
+ echo hostName
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:34.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:34.511: INFO: rc: 1
Nov 13 03:48:34.511: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:35.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:35.558: INFO: rc: 1
Nov 13 03:48:35.558: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:36.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:36.537: INFO: rc: 1
Nov 13 03:48:36.537: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:37.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:37.530: INFO: rc: 1
Nov 13 03:48:37.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:38.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:38.510: INFO: rc: 1
Nov 13 03:48:38.511: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:39.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:39.746: INFO: rc: 1
Nov 13 03:48:39.746: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:40.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:40.751: INFO: rc: 1
Nov 13 03:48:40.751: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:41.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:41.515: INFO: rc: 1
Nov 13 03:48:41.515: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:42.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:42.609: INFO: rc: 1
Nov 13 03:48:42.609: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:43.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:43.664: INFO: rc: 1
Nov 13 03:48:43.664: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:44.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:44.520: INFO: rc: 1
Nov 13 03:48:44.520: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:45.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:45.738: INFO: rc: 1
Nov 13 03:48:45.738: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:46.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:46.519: INFO: rc: 1
Nov 13 03:48:46.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:47.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:47.564: INFO: rc: 1
Nov 13 03:48:47.564: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30043
+ echo hostName
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:48.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:48.510: INFO: rc: 1
Nov 13 03:48:48.510: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:49.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:49.525: INFO: rc: 1
Nov 13 03:48:49.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:50.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:50.504: INFO: rc: 1
Nov 13 03:48:50.504: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30043
+ echo hostName
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:51.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:51.518: INFO: rc: 1
Nov 13 03:48:51.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:52.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:52.544: INFO: rc: 1
Nov 13 03:48:52.544: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:53.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:53.583: INFO: rc: 1
Nov 13 03:48:53.583: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:54.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:54.517: INFO: rc: 1
Nov 13 03:48:54.517: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:55.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:55.545: INFO: rc: 1
Nov 13 03:48:55.545: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:56.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:56.595: INFO: rc: 1
Nov 13 03:48:56.595: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:57.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:58.041: INFO: rc: 1
Nov 13 03:48:58.041: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:58.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:58.517: INFO: rc: 1
Nov 13 03:48:58.517: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:48:59.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:48:59.542: INFO: rc: 1
Nov 13 03:48:59.542: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30043
+ echo hostName
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:00.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:00.542: INFO: rc: 1
Nov 13 03:49:00.542: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:01.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:01.509: INFO: rc: 1
Nov 13 03:49:01.509: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:02.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:02.524: INFO: rc: 1
Nov 13 03:49:02.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:03.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:03.571: INFO: rc: 1
Nov 13 03:49:03.571: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:04.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:04.516: INFO: rc: 1
Nov 13 03:49:04.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:05.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:05.525: INFO: rc: 1
Nov 13 03:49:05.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:06.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:06.529: INFO: rc: 1
Nov 13 03:49:06.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:07.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:07.516: INFO: rc: 1
Nov 13 03:49:07.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:08.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:08.511: INFO: rc: 1
Nov 13 03:49:08.511: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:09.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:09.515: INFO: rc: 1
Nov 13 03:49:09.515: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:10.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:10.525: INFO: rc: 1
Nov 13 03:49:10.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:11.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:11.526: INFO: rc: 1
Nov 13 03:49:11.526: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:12.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:12.504: INFO: rc: 1
Nov 13 03:49:12.504: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:13.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:13.524: INFO: rc: 1
Nov 13 03:49:13.524: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:14.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:14.502: INFO: rc: 1
Nov 13 03:49:14.502: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 30043
+ echo hostName
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:15.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:15.556: INFO: rc: 1
Nov 13 03:49:15.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:16.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:16.496: INFO: rc: 1
Nov 13 03:49:16.496: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:17.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:17.528: INFO: rc: 1
Nov 13 03:49:17.528: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:18.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:18.518: INFO: rc: 1
Nov 13 03:49:18.518: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:19.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:19.554: INFO: rc: 1
Nov 13 03:49:19.554: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:20.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:20.532: INFO: rc: 1
Nov 13 03:49:20.532: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:21.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:21.516: INFO: rc: 1
Nov 13 03:49:21.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:22.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:22.516: INFO: rc: 1
Nov 13 03:49:22.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:23.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:23.512: INFO: rc: 1
Nov 13 03:49:23.513: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:24.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:24.516: INFO: rc: 1
Nov 13 03:49:24.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:25.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:25.517: INFO: rc: 1
Nov 13 03:49:25.517: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:26.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:26.529: INFO: rc: 1
Nov 13 03:49:26.529: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:27.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:28.279: INFO: rc: 1
Nov 13 03:49:28.279: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:28.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043'
Nov 13 03:49:28.545: INFO: rc: 1
Nov 13 03:49:28.545: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2785 exec execpodv94hr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30043:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 30043
nc: connect to 10.10.190.207 port 30043 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Nov 13 03:49:28.545: FAIL: Unexpected error:
    <*errors.errorString | 0xc0038e8e10>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30043 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30043 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001972600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001972600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001972600, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
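For context, the retry loop recorded above repeatedly execs nc inside the client pod until the 2m0s deadline expires, then fails the spec. The following is a minimal, hypothetical Go sketch of that retry-until-timeout pattern, not the framework's actual implementation at service.go:1245; the pod name (execpodv94hr), namespace (services-2785), and endpoint (10.10.190.207:30043) are copied from this log, everything else is illustrative.

// reachability_sketch.go — hypothetical sketch of the retry pattern above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		ns       = "services-2785"
		pod      = "execpodv94hr"
		endpoint = "10.10.190.207 30043"
		timeout  = 2 * time.Minute
		interval = time.Second
	)

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same shape of command the log shows: exec nc inside the client
		// pod and treat exit code 0 as "service reachable".
		cmd := exec.Command("kubectl", "--namespace="+ns, "exec", pod, "--",
			"/bin/sh", "-x", "-c", "echo hostName | nc -v -t -w 2 "+endpoint)
		if err := cmd.Run(); err == nil {
			fmt.Println("service reachable")
			return
		}
		fmt.Println("Retrying...")
		time.Sleep(interval)
	}
	fmt.Printf("service is not reachable within %v timeout on endpoint 10.10.190.207:30043 over TCP protocol\n", timeout)
}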
Nov 13 03:49:28.546: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2785".
STEP: Found 17 events.
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:05 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-vkrtp
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:05 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-tqj5d
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:05 +0000 UTC - event for nodeport-update-service-tqj5d: {default-scheduler } Scheduled: Successfully assigned services-2785/nodeport-update-service-tqj5d to node2
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:05 +0000 UTC - event for nodeport-update-service-vkrtp: {default-scheduler } Scheduled: Successfully assigned services-2785/nodeport-update-service-vkrtp to node2
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:07 +0000 UTC - event for nodeport-update-service-tqj5d: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:07 +0000 UTC - event for nodeport-update-service-tqj5d: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 299.29518ms
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:08 +0000 UTC - event for nodeport-update-service-tqj5d: {kubelet node2} Created: Created container nodeport-update-service
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:09 +0000 UTC - event for nodeport-update-service-tqj5d: {kubelet node2} Started: Started container nodeport-update-service
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:09 +0000 UTC - event for nodeport-update-service-vkrtp: {kubelet node2} Created: Created container nodeport-update-service
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:09 +0000 UTC - event for nodeport-update-service-vkrtp: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:09 +0000 UTC - event for nodeport-update-service-vkrtp: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 350.292559ms
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:10 +0000 UTC - event for nodeport-update-service-vkrtp: {kubelet node2} Started: Started container nodeport-update-service
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:17 +0000 UTC - event for execpodv94hr: {default-scheduler } Scheduled: Successfully assigned services-2785/execpodv94hr to node2
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:19 +0000 UTC - event for execpodv94hr: {kubelet node2} Started: Started container agnhost-container
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:19 +0000 UTC - event for execpodv94hr: {kubelet node2} Created: Created container agnhost-container
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:19 +0000 UTC - event for execpodv94hr: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:49:28.572: INFO: At 2021-11-13 03:47:19 +0000 UTC - event for execpodv94hr: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 324.629999ms
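The event dump above can be reproduced outside the framework; a hypothetical client-go sketch is shown below. The kubeconfig path (/root/.kube/config) and namespace (services-2785) are taken from this log; the code itself is illustrative, not the framework's event collector.

// events_sketch.go — hypothetical sketch of listing the same events.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List every event in the test namespace, the same data the framework
	// prints as "Found 17 events."
	events, err := cs.CoreV1().Events("services-2785").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s/%s %s: %s\n",
			e.LastTimestamp, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}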
Nov 13 03:49:28.575: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Nov 13 03:49:28.575: INFO: execpodv94hr                   node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:17 +0000 UTC  }]
Nov 13 03:49:28.575: INFO: nodeport-update-service-tqj5d  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:05 +0000 UTC  }]
Nov 13 03:49:28.575: INFO: nodeport-update-service-vkrtp  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:47:05 +0000 UTC  }]
Nov 13 03:49:28.575: INFO: 
Nov 13 03:49:28.579: INFO: 
Logging node info for node master1
Nov 13 03:49:28.581: INFO: Node Info: &Node{ObjectMeta:{master1    56d66c54-e52b-494a-a758-e4b658c4b245 145631 0 2021-11-12 21:05:50 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:22 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:22 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:22 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:22 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
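The node conditions embedded in the Node Info dump above (MemoryPressure, DiskPressure, PIDPressure, Ready) can likewise be read directly with client-go; a hypothetical sketch follows. The node name (master1) and kubeconfig path come from this log; the code is illustrative only.

// nodeinfo_sketch.go — hypothetical sketch of reading node conditions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "master1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same condition fields that appear in the dump above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}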
Nov 13 03:49:28.582: INFO: 
Logging kubelet events for node master1
Nov 13 03:49:28.584: INFO: 
Logging pods the kubelet thinks are on node master1
Nov 13 03:49:28.603: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.603: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:49:28.603: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.603: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:49:28.603: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:28.603: INFO: 	Container docker-registry ready: true, restart count 0
Nov 13 03:49:28.603: INFO: 	Container nginx ready: true, restart count 0
Nov 13 03:49:28.603: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:28.603: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:28.603: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:49:28.603: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.603: INFO: 	Container kube-scheduler ready: true, restart count 0
Nov 13 03:49:28.603: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.603: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov 13 03:49:28.603: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:28.603: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:49:28.604: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 13 03:49:28.604: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.604: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:28.604: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.604: INFO: 	Container coredns ready: true, restart count 2
W1113 03:49:28.619421      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:28.691: INFO: 
Latency metrics for node master1
Nov 13 03:49:28.691: INFO: 
Logging node info for node master2
Nov 13 03:49:28.695: INFO: Node Info: &Node{ObjectMeta:{master2    9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 145602 0 2021-11-12 21:06:20 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:19 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:19 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:19 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:19 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:49:28.695: INFO: 
Logging kubelet events for node master2
Nov 13 03:49:28.699: INFO: 
Logging pods the kubelet thinks are on node master2
Nov 13 03:49:28.709: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.709: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:49:28.709: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.709: INFO: 	Container nfd-controller ready: true, restart count 0
Nov 13 03:49:28.709: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.709: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov 13 03:49:28.709: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.709: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:49:28.709: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.709: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:49:28.709: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:28.709: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:49:28.709: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:49:28.709: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.709: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:28.709: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.709: INFO: 	Container coredns ready: true, restart count 1
Nov 13 03:49:28.709: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:28.709: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:28.709: INFO: 	Container node-exporter ready: true, restart count 0
W1113 03:49:28.724063      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:28.789: INFO: 
Latency metrics for node master2
Nov 13 03:49:28.789: INFO: 
Logging node info for node master3
Nov 13 03:49:28.794: INFO: Node Info: &Node{ObjectMeta:{master3    fce0cd54-e4d8-4ce1-b720-522aad2d7989 145601 0 2021-11-12 21:06:31 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:19 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:19 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:19 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:19 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:49:28.794: INFO: 
Logging kubelet events for node master3
Nov 13 03:49:28.796: INFO: 
Logging pods the kubelet thinks are on node master3
Nov 13 03:49:28.805: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.805: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:49:28.805: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:28.805: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:49:28.805: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:49:28.805: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.805: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:28.805: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.805: INFO: 	Container autoscaler ready: true, restart count 1
Nov 13 03:49:28.805: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.805: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:49:28.805: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.805: INFO: 	Container kube-controller-manager ready: true, restart count 3
Nov 13 03:49:28.805: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.805: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:49:28.805: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:28.805: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:28.805: INFO: 	Container node-exporter ready: true, restart count 0
W1113 03:49:28.818219      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:28.882: INFO: 
Latency metrics for node master3
Nov 13 03:49:28.882: INFO: 
Logging node info for node node1
Nov 13 03:49:28.885: INFO: Node Info: &Node{ObjectMeta:{node1    6ceb907c-9809-4d18-88c6-b1e10ba80f97 145607 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-11-13 01:56:37 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-11-13 01:56:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:20 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:20 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:20 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:20 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 
k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:49:28.887: INFO: 
Logging kubelet events for node node1
Nov 13 03:49:28.889: INFO: 
Logging pods the kubelet thinks are on node node1
Nov 13 03:49:28.960: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:49:28.960: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 13 03:49:28.960: INFO: startup-script started at 2021-11-13 03:47:42 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container startup-script ready: false, restart count 0
Nov 13 03:49:28.960: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:49:28.960: INFO: 	Container init ready: false, restart count 0
Nov 13 03:49:28.960: INFO: 	Container install ready: false, restart count 0
Nov 13 03:49:28.960: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 13 03:49:28.960: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov 13 03:49:28.960: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container config-reloader ready: true, restart count 0
Nov 13 03:49:28.960: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov 13 03:49:28.960: INFO: 	Container grafana ready: true, restart count 0
Nov 13 03:49:28.960: INFO: 	Container prometheus ready: true, restart count 1
Nov 13 03:49:28.960: INFO: pod-client started at 2021-11-13 03:48:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container pod-client ready: true, restart count 0
Nov 13 03:49:28.960: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:49:28.960: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:49:28.960: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 13 03:49:28.960: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:28.960: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:49:28.960: INFO: 	Container reconcile ready: true, restart count 0
Nov 13 03:49:28.960: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:28.960: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:49:28.960: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:28.960: INFO: 	Container prometheus-operator ready: true, restart count 0
Nov 13 03:49:28.960: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:49:28.960: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:49:28.960: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:49:28.960: INFO: 	Container rbac-proxy ready: true, restart count 0
W1113 03:49:28.975859      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:29.226: INFO: 
Latency metrics for node node1
Nov 13 03:49:29.226: INFO: 
Logging node info for node node2
Nov 13 03:49:29.229: INFO: Node Info: &Node{ObjectMeta:{node2    652722dd-12b1-4529-ba4d-a00c590e4a68 145638 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 01:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-13 02:52:24 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:24 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:24 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:24 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:24 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:49:29.230: INFO: 
Logging kubelet events for node node2
Nov 13 03:49:29.233: INFO: 
Logging pods the kubelet thinks are on node node2
Nov 13 03:49:29.247: INFO: execpodv94hr started at 2021-11-13 03:47:17 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.247: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:49:29.247: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.247: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 13 03:49:29.247: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:49:29.248: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:49:29.248: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 13 03:49:29.248: INFO: up-down-2-9d7hj started at 2021-11-13 03:48:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container up-down-2 ready: true, restart count 0
Nov 13 03:49:29.248: INFO: up-down-2-zcxcg started at 2021-11-13 03:48:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container up-down-2 ready: true, restart count 0
Nov 13 03:49:29.248: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:49:29.248: INFO: 	Container reconcile ready: true, restart count 0
Nov 13 03:49:29.248: INFO: pod-server-1 started at 2021-11-13 03:48:37 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:49:29.248: INFO: verify-service-up-host-exec-pod started at 2021-11-13 03:49:25 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:49:29.248: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:49:29.248: INFO: 	Container init ready: false, restart count 0
Nov 13 03:49:29.248: INFO: 	Container install ready: false, restart count 0
Nov 13 03:49:29.248: INFO: up-down-2-zs6bq started at 2021-11-13 03:48:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container up-down-2 ready: true, restart count 0
Nov 13 03:49:29.248: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container tas-extender ready: true, restart count 0
Nov 13 03:49:29.248: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Nov 13 03:49:29.248: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:29.248: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:49:29.248: INFO: nodeport-update-service-vkrtp started at 2021-11-13 03:47:05 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov 13 03:49:29.248: INFO: nodeport-update-service-tqj5d started at 2021-11-13 03:47:05 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container nodeport-update-service ready: true, restart count 0
Nov 13 03:49:29.248: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:49:29.248: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 13 03:49:29.248: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:29.248: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Nov 13 03:49:29.248: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:49:29.248: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:29.248: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:49:29.248: INFO: 	Container kube-flannel ready: true, restart count 2
W1113 03:49:29.261488      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:29.449: INFO: 
Latency metrics for node node2
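
The Node Info blocks above are raw dumps of the v1.Node objects, including their Capacity and Allocatable resource lists. For reference, a minimal client-go sketch that prints only the allocatable resources those dumps report might look like this; it reuses the kubeconfig path the suite itself logs, and is purely illustrative, not part of the e2e framework:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite reports with ">>> kubeConfig:".
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		alloc := n.Status.Allocatable
		// cpu/memory/pods mirror the Allocatable fields in the dumps above.
		fmt.Printf("%s\tcpu=%s\tmemory=%s\tpods=%s\n",
			n.Name, alloc.Cpu().String(), alloc.Memory().String(), alloc.Pods().String())
	}
}
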
Nov 13 03:49:29.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2785" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [144.143 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Nov 13 03:49:28.545: Unexpected error:
      <*errors.errorString | 0xc0038e8e10>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30043 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30043 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":0,"skipped":69,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Nov 13 03:49:29.469: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:21.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-3117
STEP: creating a client pod for probing the service svc-udp
Nov 13 03:48:21.685: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:23.689: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:25.689: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:27.690: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:29.688: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:31.690: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:33.688: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:35.688: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:37.690: INFO: The status of Pod pod-client is Running (Ready = true)
Nov 13 03:48:37.701: INFO: Pod client logs: Sat Nov 13 03:48:30 UTC 2021
Sat Nov 13 03:48:30 UTC 2021 Try: 1

Sat Nov 13 03:48:30 UTC 2021 Try: 2

Sat Nov 13 03:48:30 UTC 2021 Try: 3

Sat Nov 13 03:48:30 UTC 2021 Try: 4

Sat Nov 13 03:48:30 UTC 2021 Try: 5

Sat Nov 13 03:48:30 UTC 2021 Try: 6

Sat Nov 13 03:48:30 UTC 2021 Try: 7

Sat Nov 13 03:48:35 UTC 2021 Try: 8

Sat Nov 13 03:48:35 UTC 2021 Try: 9

Sat Nov 13 03:48:35 UTC 2021 Try: 10

Sat Nov 13 03:48:35 UTC 2021 Try: 11

Sat Nov 13 03:48:35 UTC 2021 Try: 12

Sat Nov 13 03:48:35 UTC 2021 Try: 13

STEP: creating a backend pod pod-server-1 for the service svc-udp
Nov 13 03:48:37.715: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:39.718: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:41.720: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-3117 to expose endpoints map[pod-server-1:[80]]
Nov 13 03:48:41.729: INFO: successfully validated that service svc-udp in namespace conntrack-3117 exposes endpoints map[pod-server-1:[80]]
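
The repeated "Try: N" lines in the pod-client logs (above and below) come from a probe loop that keeps sending a datagram to the service's UDP NodePort and waiting briefly for an echo from the backend. A rough, self-contained Go equivalent of that loop is sketched here; the node IP matches the step that follows, but the NodePort value is a placeholder because the assigned port does not appear in this excerpt:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.10.190.208 is the node IP from the step below; 30573 is a
	// hypothetical NodePort standing in for the one assigned to svc-udp.
	const target = "10.10.190.208:30573"

	for try := 1; try <= 10; try++ {
		fmt.Printf("%s Try: %d\n", time.Now().UTC().Format(time.UnixDate), try)
		conn, err := net.Dial("udp", target)
		if err != nil {
			time.Sleep(time.Second)
			continue
		}
		conn.SetDeadline(time.Now().Add(time.Second))
		fmt.Fprintln(conn, "probe") // payload content is arbitrary for this sketch
		buf := make([]byte, 1024)
		n, err := conn.Read(buf)
		conn.Close()
		if err == nil {
			fmt.Printf("got reply from backend: %s\n", buf[:n])
			return
		}
		time.Sleep(time.Second)
	}
}
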
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Nov 13 03:49:41.763: INFO: Pod client logs: Sat Nov 13 03:48:30 UTC 2021
Sat Nov 13 03:48:30 UTC 2021 Try: 1

Sat Nov 13 03:48:30 UTC 2021 Try: 2

Sat Nov 13 03:48:30 UTC 2021 Try: 3

Sat Nov 13 03:48:30 UTC 2021 Try: 4

Sat Nov 13 03:48:30 UTC 2021 Try: 5

Sat Nov 13 03:48:30 UTC 2021 Try: 6

Sat Nov 13 03:48:30 UTC 2021 Try: 7

Sat Nov 13 03:48:35 UTC 2021 Try: 8

Sat Nov 13 03:48:35 UTC 2021 Try: 9

Sat Nov 13 03:48:35 UTC 2021 Try: 10

Sat Nov 13 03:48:35 UTC 2021 Try: 11

Sat Nov 13 03:48:35 UTC 2021 Try: 12

Sat Nov 13 03:48:35 UTC 2021 Try: 13

Sat Nov 13 03:48:40 UTC 2021 Try: 14

Sat Nov 13 03:48:40 UTC 2021 Try: 15

Sat Nov 13 03:48:40 UTC 2021 Try: 16

Sat Nov 13 03:48:40 UTC 2021 Try: 17

Sat Nov 13 03:48:40 UTC 2021 Try: 18

Sat Nov 13 03:48:40 UTC 2021 Try: 19

Sat Nov 13 03:48:45 UTC 2021 Try: 20

Sat Nov 13 03:48:45 UTC 2021 Try: 21

Sat Nov 13 03:48:45 UTC 2021 Try: 22

Sat Nov 13 03:48:45 UTC 2021 Try: 23

Sat Nov 13 03:48:45 UTC 2021 Try: 24

Sat Nov 13 03:48:45 UTC 2021 Try: 25

Sat Nov 13 03:48:50 UTC 2021 Try: 26

Sat Nov 13 03:48:50 UTC 2021 Try: 27

Sat Nov 13 03:48:50 UTC 2021 Try: 28

Sat Nov 13 03:48:50 UTC 2021 Try: 29

Sat Nov 13 03:48:50 UTC 2021 Try: 30

Sat Nov 13 03:48:50 UTC 2021 Try: 31

Sat Nov 13 03:48:55 UTC 2021 Try: 32

Sat Nov 13 03:48:55 UTC 2021 Try: 33

Sat Nov 13 03:48:55 UTC 2021 Try: 34

Sat Nov 13 03:48:55 UTC 2021 Try: 35

Sat Nov 13 03:48:55 UTC 2021 Try: 36

Sat Nov 13 03:48:55 UTC 2021 Try: 37

Sat Nov 13 03:49:00 UTC 2021 Try: 38

Sat Nov 13 03:49:00 UTC 2021 Try: 39

Sat Nov 13 03:49:00 UTC 2021 Try: 40

Sat Nov 13 03:49:00 UTC 2021 Try: 41

Sat Nov 13 03:49:00 UTC 2021 Try: 42

Sat Nov 13 03:49:00 UTC 2021 Try: 43

Sat Nov 13 03:49:05 UTC 2021 Try: 44

Sat Nov 13 03:49:05 UTC 2021 Try: 45

Sat Nov 13 03:49:05 UTC 2021 Try: 46

Sat Nov 13 03:49:05 UTC 2021 Try: 47

Sat Nov 13 03:49:05 UTC 2021 Try: 48

Sat Nov 13 03:49:05 UTC 2021 Try: 49

Sat Nov 13 03:49:10 UTC 2021 Try: 50

Sat Nov 13 03:49:10 UTC 2021 Try: 51

Sat Nov 13 03:49:10 UTC 2021 Try: 52

Sat Nov 13 03:49:10 UTC 2021 Try: 53

Sat Nov 13 03:49:10 UTC 2021 Try: 54

Sat Nov 13 03:49:10 UTC 2021 Try: 55

Sat Nov 13 03:49:15 UTC 2021 Try: 56

Sat Nov 13 03:49:15 UTC 2021 Try: 57

Sat Nov 13 03:49:15 UTC 2021 Try: 58

Sat Nov 13 03:49:15 UTC 2021 Try: 59

Sat Nov 13 03:49:15 UTC 2021 Try: 60

Sat Nov 13 03:49:15 UTC 2021 Try: 61

Sat Nov 13 03:49:20 UTC 2021 Try: 62

Sat Nov 13 03:49:20 UTC 2021 Try: 63

Sat Nov 13 03:49:20 UTC 2021 Try: 64

Sat Nov 13 03:49:20 UTC 2021 Try: 65

Sat Nov 13 03:49:20 UTC 2021 Try: 66

Sat Nov 13 03:49:20 UTC 2021 Try: 67

Sat Nov 13 03:49:25 UTC 2021 Try: 68

Sat Nov 13 03:49:25 UTC 2021 Try: 69

Sat Nov 13 03:49:25 UTC 2021 Try: 70

Sat Nov 13 03:49:25 UTC 2021 Try: 71

Sat Nov 13 03:49:25 UTC 2021 Try: 72

Sat Nov 13 03:49:25 UTC 2021 Try: 73

Sat Nov 13 03:49:30 UTC 2021 Try: 74

Sat Nov 13 03:49:30 UTC 2021 Try: 75

Sat Nov 13 03:49:30 UTC 2021 Try: 76

Sat Nov 13 03:49:30 UTC 2021 Try: 77

Sat Nov 13 03:49:30 UTC 2021 Try: 78

Sat Nov 13 03:49:30 UTC 2021 Try: 79

Sat Nov 13 03:49:35 UTC 2021 Try: 80

Sat Nov 13 03:49:35 UTC 2021 Try: 81

Sat Nov 13 03:49:35 UTC 2021 Try: 82

Sat Nov 13 03:49:35 UTC 2021 Try: 83

Sat Nov 13 03:49:35 UTC 2021 Try: 84

Sat Nov 13 03:49:35 UTC 2021 Try: 85

Sat Nov 13 03:49:40 UTC 2021 Try: 86

Sat Nov 13 03:49:40 UTC 2021 Try: 87

Sat Nov 13 03:49:40 UTC 2021 Try: 88

Sat Nov 13 03:49:40 UTC 2021 Try: 89

Sat Nov 13 03:49:40 UTC 2021 Try: 90

Sat Nov 13 03:49:40 UTC 2021 Try: 91

Nov 13 03:49:41.763: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001932600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001932600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001932600, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
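
The "Try: N" lines above come from the test's connectivity probe, which retried roughly every five seconds for about 90 attempts before the suite declared "Failed to connect to backend 1". The following Go sketch only reproduces the shape of such a probe loop; the target address, dial protocol, interval, and try budget are illustrative assumptions and not taken from the actual conntrack test client.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeBackend dials addr once per interval and reports whether any attempt
// within maxTries succeeded. It prints the same "date ... Try: N" shape seen
// in the log above; the real e2e client runs its own agnhost-based command,
// so treat this as an illustrative sketch only.
func probeBackend(addr string, interval time.Duration, maxTries int) bool {
	for try := 1; try <= maxTries; try++ {
		fmt.Printf("%s Try: %d\n", time.Now().UTC().Format(time.UnixDate), try)
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	// Hypothetical backend address; the test's real target is the
	// pod-server-1 endpoint exposed by the service under test.
	if !probeBackend("10.0.0.1:8080", 5*time.Second, 91) {
		fmt.Println("FAIL: Failed to connect to backend 1")
	}
}
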
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-3117".
STEP: Found 8 events.
Nov 13 03:49:41.768: INFO: At 2021-11-13 03:48:28 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:49:41.768: INFO: At 2021-11-13 03:48:28 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 286.523309ms
Nov 13 03:49:41.768: INFO: At 2021-11-13 03:48:29 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Nov 13 03:49:41.768: INFO: At 2021-11-13 03:48:30 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Nov 13 03:49:41.768: INFO: At 2021-11-13 03:48:39 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Nov 13 03:49:41.768: INFO: At 2021-11-13 03:48:40 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 297.685119ms
Nov 13 03:49:41.768: INFO: At 2021-11-13 03:48:40 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Nov 13 03:49:41.768: INFO: At 2021-11-13 03:48:40 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
Nov 13 03:49:41.770: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Nov 13 03:49:41.770: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:48:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:48:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:48:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:48:21 +0000 UTC  }]
Nov 13 03:49:41.770: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:48:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:48:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:48:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 03:48:37 +0000 UTC  }]
Nov 13 03:49:41.770: INFO: 
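
The teardown above gathers the namespace's events and pod statuses before destroying the fixture. The same information can be retrieved directly with client-go; a minimal sketch, assuming the kubeconfig at /root/.kube/config used by the suite and the conntrack-3117 namespace shown in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// List the events that the AfterEach block logs for the test namespace.
	events, err := clientset.CoreV1().Events("conntrack-3117").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%v %s/%s: %s %s\n", e.LastTimestamp.Time, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}
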
Nov 13 03:49:41.774: INFO: 
Logging node info for node master1
Nov 13 03:49:41.777: INFO: Node Info: &Node{ObjectMeta:{master1    56d66c54-e52b-494a-a758-e4b658c4b245 145690 0 2021-11-12 21:05:50 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:32 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:32 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:32 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:32 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
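
The block above is the full v1.Node object for master1; when only node health is of interest, the conditions can be pulled out programmatically. A short sketch under the same kubeconfig assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Fetch the node and print just its conditions: the part of the dump
	// above covering MemoryPressure, DiskPressure, PIDPressure, and Ready.
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "master1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}
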
Nov 13 03:49:41.778: INFO: 
Logging kubelet events for node master1
Nov 13 03:49:41.780: INFO: 
Logging pods the kubelet thinks are on node master1
Nov 13 03:49:41.801: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.801: INFO: 	Container coredns ready: true, restart count 2
Nov 13 03:49:41.801: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:41.801: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:41.801: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:49:41.801: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.801: INFO: 	Container kube-scheduler ready: true, restart count 0
Nov 13 03:49:41.801: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.801: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov 13 03:49:41.801: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:41.801: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:49:41.801: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 13 03:49:41.801: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.801: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:41.801: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:41.801: INFO: 	Container docker-registry ready: true, restart count 0
Nov 13 03:49:41.801: INFO: 	Container nginx ready: true, restart count 0
Nov 13 03:49:41.801: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.801: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:49:41.801: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.801: INFO: 	Container kube-proxy ready: true, restart count 1
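
The per-node pod inventory above ("pods the kubelet thinks are on node master1") can be reproduced with a field selector on spec.nodeName; a sketch, again assuming the suite's kubeconfig:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// List pods in all namespaces scheduled onto master1, mirroring the
	// "started at ... container statuses" lines in the log above.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=master1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("\tcontainer %s ready=%t restarts=%d\n", cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}
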
W1113 03:49:41.814147      22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:41.889: INFO: 
Latency metrics for node master1
Nov 13 03:49:41.889: INFO: 
Logging node info for node master2
Nov 13 03:49:41.891: INFO: Node Info: &Node{ObjectMeta:{master2    9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 145768 0 2021-11-12 21:06:20 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:39 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:39 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:39 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:39 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:49:41.892: INFO: 
Logging kubelet events for node master2
Nov 13 03:49:41.894: INFO: 
Logging pods the kubelet thinks are on node master2
Nov 13 03:49:41.903: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.903: INFO: 	Container coredns ready: true, restart count 1
Nov 13 03:49:41.903: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:41.903: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:41.903: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:49:41.903: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.903: INFO: 	Container kube-controller-manager ready: true, restart count 2
Nov 13 03:49:41.903: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.903: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:49:41.903: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.903: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:49:41.903: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:41.903: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:49:41.903: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:49:41.903: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.903: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:41.903: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.903: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:49:41.903: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:41.903: INFO: 	Container nfd-controller ready: true, restart count 0
W1113 03:49:41.917507      22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:41.996: INFO: 
Latency metrics for node master2
Nov 13 03:49:41.996: INFO: 
Logging node info for node master3
Nov 13 03:49:41.998: INFO: Node Info: &Node{ObjectMeta:{master3    fce0cd54-e4d8-4ce1-b720-522aad2d7989 145767 0 2021-11-12 21:06:31 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:39 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:39 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:39 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:39 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:49:41.999: INFO: 
Logging kubelet events for node master3
Nov 13 03:49:42.000: INFO: 
Logging pods the kubelet thinks are on node master3
Nov 13 03:49:42.010: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.010: INFO: 	Container kube-apiserver ready: true, restart count 0
Nov 13 03:49:42.010: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.010: INFO: 	Container kube-controller-manager ready: true, restart count 3
Nov 13 03:49:42.010: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.010: INFO: 	Container kube-scheduler ready: true, restart count 2
Nov 13 03:49:42.010: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:42.010: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:42.010: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:49:42.010: INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.010: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:49:42.010: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:42.010: INFO: 	Init container install-cni ready: true, restart count 0
Nov 13 03:49:42.010: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 13 03:49:42.010: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.010: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:42.010: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.010: INFO: 	Container autoscaler ready: true, restart count 1
W1113 03:49:42.025965      22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:42.088: INFO: 
Latency metrics for node master3
Nov 13 03:49:42.088: INFO: 
Logging node info for node node1
Nov 13 03:49:42.091: INFO: Node Info: &Node{ObjectMeta:{node1    6ceb907c-9809-4d18-88c6-b1e10ba80f97 145777 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-11-13 01:56:37 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-11-13 01:56:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:40 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:40 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:40 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:40 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 
k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 13 03:49:42.092: INFO: 
Logging kubelet events for node node1
Nov 13 03:49:42.093: INFO: 
Logging pods the kubelet thinks are on node node1
Nov 13 03:49:42.107: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:49:42.107: INFO: 	Container init ready: false, restart count 0
Nov 13 03:49:42.107: INFO: 	Container install ready: false, restart count 0
Nov 13 03:49:42.107: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 13 03:49:42.107: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov 13 03:49:42.107: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container config-reloader ready: true, restart count 0
Nov 13 03:49:42.107: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov 13 03:49:42.107: INFO: 	Container grafana ready: true, restart count 0
Nov 13 03:49:42.107: INFO: 	Container prometheus ready: true, restart count 1
Nov 13 03:49:42.107: INFO: pod-client started at 2021-11-13 03:48:21 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container pod-client ready: true, restart count 0
Nov 13 03:49:42.107: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:42.107: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:49:42.107: INFO: 	Container reconcile ready: true, restart count 0
Nov 13 03:49:42.107: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:42.107: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:49:42.107: INFO: up-down-3-6gglc started at 2021-11-13 03:49:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container up-down-3 ready: false, restart count 0
Nov 13 03:49:42.107: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:49:42.107: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:49:42.107: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 13 03:49:42.107: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:42.107: INFO: 	Container prometheus-operator ready: true, restart count 0
Nov 13 03:49:42.107: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:49:42.107: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:49:42.107: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 13 03:49:42.107: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 13 03:49:42.107: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.107: INFO: 	Container kube-sriovdp ready: true, restart count 0
W1113 03:49:42.159537      22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:42.304: INFO: 
Latency metrics for node node1
Nov 13 03:49:42.304: INFO: 
Logging node info for node node2
Nov 13 03:49:42.307: INFO: Node Info: &Node{ObjectMeta:{node2    652722dd-12b1-4529-ba4d-a00c590e4a68 145692 0 2021-11-12 21:07:36 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 01:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-13 02:52:24 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:34 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:34 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 03:49:34 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 03:49:34 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
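
The Allocatable block in the Node Info above is what the scheduler actually budgets against (Capacity minus reserved resources and eviction thresholds). As a minimal client-go sketch, assuming the same kubeconfig path the suite uses and a reachable API server, the per-node allocatable figures can be read like this; it is not part of the suite, just an illustration of where those numbers come from:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the kubeconfig path used by the suite; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Allocatable is a ResourceList (map of resource.Quantity values),
		// the same fields dumped as DecimalSI/BinarySI quantities above.
		cpu := n.Status.Allocatable[corev1.ResourceCPU]
		mem := n.Status.Allocatable[corev1.ResourceMemory]
		pods := n.Status.Allocatable[corev1.ResourcePods]
		fmt.Printf("%s: cpu=%s memory=%s pods=%s\n", n.Name, cpu.String(), mem.String(), pods.String())
	}
}
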
Nov 13 03:49:42.308: INFO: 
Logging kubelet events for node node2
Nov 13 03:49:42.310: INFO: 
Logging pods the kubelet thinks are on node node2
Nov 13 03:49:42.331: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.331: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 13 03:49:42.331: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.331: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 13 03:49:42.331: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.331: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 13 03:49:42.331: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded)
Nov 13 03:49:42.331: INFO: 	Init container install-cni ready: true, restart count 2
Nov 13 03:49:42.331: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 13 03:49:42.331: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.331: INFO: 	Container kube-multus ready: true, restart count 1
Nov 13 03:49:42.331: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.331: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Nov 13 03:49:42.331: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.331: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 13 03:49:42.331: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:49:42.331: INFO: 	Container collectd ready: true, restart count 0
Nov 13 03:49:42.331: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 13 03:49:42.331: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 13 03:49:42.332: INFO: up-down-2-9d7hj started at 2021-11-13 03:48:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container up-down-2 ready: true, restart count 0
Nov 13 03:49:42.332: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container nodereport ready: true, restart count 0
Nov 13 03:49:42.332: INFO: 	Container reconcile ready: true, restart count 0
Nov 13 03:49:42.332: INFO: pod-server-1 started at 2021-11-13 03:48:37 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container agnhost-container ready: true, restart count 0
Nov 13 03:49:42.332: INFO: up-down-2-zcxcg started at 2021-11-13 03:48:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container up-down-2 ready: true, restart count 0
Nov 13 03:49:42.332: INFO: up-down-3-cbcwt started at 2021-11-13 03:49:38 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container up-down-3 ready: true, restart count 0
Nov 13 03:49:42.332: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container discover ready: false, restart count 0
Nov 13 03:49:42.332: INFO: 	Container init ready: false, restart count 0
Nov 13 03:49:42.332: INFO: 	Container install ready: false, restart count 0
Nov 13 03:49:42.332: INFO: up-down-2-zs6bq started at 2021-11-13 03:48:19 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container up-down-2 ready: true, restart count 0
Nov 13 03:49:42.332: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Nov 13 03:49:42.332: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 13 03:49:42.332: INFO: 	Container node-exporter ready: true, restart count 0
Nov 13 03:49:42.332: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container tas-extender ready: true, restart count 0
Nov 13 03:49:42.332: INFO: up-down-3-6g7v4 started at 2021-11-13 03:49:38 +0000 UTC (0+1 container statuses recorded)
Nov 13 03:49:42.332: INFO: 	Container up-down-3 ready: true, restart count 0
W1113 03:49:42.346700      22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 13 03:49:42.517: INFO: 
Latency metrics for node node2
Nov 13 03:49:42.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-3117" for this suite.


• Failure [80.890 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Nov 13 03:49:41.763: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":2,"skipped":458,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
Nov 13 03:49:42.530: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 03:48:10.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-8010
STEP: creating service up-down-1 in namespace services-8010
STEP: creating replication controller up-down-1 in namespace services-8010
I1113 03:48:10.515520      25 runners.go:190] Created replication controller with name: up-down-1, namespace: services-8010, replica count: 3
I1113 03:48:13.567270      25 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:48:16.567764      25 runners.go:190] up-down-1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:48:19.569121      25 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-8010
STEP: creating service up-down-2 in namespace services-8010
STEP: creating replication controller up-down-2 in namespace services-8010
I1113 03:48:19.583155      25 runners.go:190] Created replication controller with name: up-down-2, namespace: services-8010, replica count: 3
I1113 03:48:22.635096      25 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:48:25.636071      25 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:48:28.636733      25 runners.go:190] up-down-2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:48:31.638246      25 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service up-down-1 is up
Nov 13 03:48:31.640: INFO: Creating new host exec pod
Nov 13 03:48:31.653: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:33.656: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:35.657: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:37.659: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:39.657: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:41.657: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:48:41.657: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:48:45.675: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.17.29:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-host-exec-pod
Nov 13 03:48:45.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.17.29:80 2>&1 || true; echo; done'
Nov 13 03:48:46.028: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n"
Nov 13 03:48:46.028: INFO: stdout: "up-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-74sqh\n"
Nov 13 03:48:46.029: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.17.29:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-exec-pod-cqcc8
Nov 13 03:48:46.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-exec-pod-cqcc8 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.17.29:80 2>&1 || true; echo; done'
Nov 13 03:48:46.408: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.17.29:80\n+ echo\n"
Nov 13 03:48:46.408: INFO: stdout: "up-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-74sqh\nup-down-1-jqxgs\nup-down-1-74sqh\nup-down-1-74sqh\nup-down-1-4klr5\nup-down-1-jqxgs\nup-down-1-jqxgs\nup-down-1-jqxgs\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8010
STEP: Deleting pod verify-service-up-exec-pod-cqcc8 in namespace services-8010
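
The step above is the framework's reachability check: hit the ClusterIP 150 times and expect every replica's hostname to appear in the responses. Below is a minimal Go sketch of the same idea, using the ClusterIP and pod names taken from the log above; it assumes it runs somewhere with a route to the ClusterIP (e.g., inside the cluster), which is why the suite drives the check from a host-exec pod and an exec pod rather than from the test binary:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// Values taken from the log above: the up-down-1 ClusterIP and its three replicas.
var (
	serviceURL = "http://10.233.17.29:80"
	expected   = []string{"up-down-1-74sqh", "up-down-1-jqxgs", "up-down-1-4klr5"}
)

func main() {
	client := &http.Client{Timeout: time.Second}
	seen := map[string]bool{}
	for i := 0; i < 150 && len(seen) < len(expected); i++ {
		resp, err := client.Get(serviceURL)
		if err != nil {
			continue // tolerate transient failures, like the wget loop's "|| true"
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))] = true
	}
	for _, name := range expected {
		if !seen[name] {
			fmt.Println("missing backend:", name)
			return
		}
	}
	fmt.Println("all", len(expected), "backends reachable")
}
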
STEP: verifying service up-down-2 is up
Nov 13 03:48:46.425: INFO: Creating new host exec pod
Nov 13 03:48:46.437: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:48.440: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:50.441: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:52.442: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:54.441: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:56.442: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:48:58.440: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:00.441: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:02.443: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:49:02.443: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:49:06.463: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-host-exec-pod
Nov 13 03:49:06.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done'
Nov 13 03:49:06.837: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.23.240:80\n+ echo\n ... [wget/echo trace repeats identically through iteration 150; full dump elided] ...\n"
Nov 13 03:49:06.838: INFO: stdout: "up-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-9d7hj\n ... [remaining responses elided; every line names one of the three up-down-2 backends zcxcg, zs6bq or 9d7hj] ...\n"
Nov 13 03:49:06.838: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-exec-pod-dv57w
Nov 13 03:49:06.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-exec-pod-dv57w -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done'
Nov 13 03:49:07.244: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n ... [wget/echo trace repeats identically through iteration 150; full dump elided] ...\n"
Nov 13 03:49:07.245: INFO: stdout: "up-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\n ... [remaining responses elided; every line names one of the three up-down-2 backends zcxcg, zs6bq or 9d7hj] ...\n"
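The reachability check above is driven entirely through kubectl exec against the service's ClusterIP, with each backend answering with its own pod name. A minimal standalone sketch of the same probe, assuming a throwaway busybox pod called manual-probe (the namespace, VIP and loop come straight from the log; the pod name and image are illustrative assumptions, not what the suite uses):

  # Create a short-lived client pod to probe from (pod name and image are assumptions).
  kubectl run manual-probe --image=busybox:1.29 --restart=Never -n services-8010 --command -- sleep 3600
  kubectl wait --for=condition=Ready pod/manual-probe -n services-8010 --timeout=60s
  # Same loop the suite runs: 150 one-second-timeout GETs against the service VIP;
  # each successful request prints the serving pod's hostname, a failure prints a blank line.
  kubectl exec -n services-8010 manual-probe -- /bin/sh -c \
    'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done'
  # Clean up the probe pod.
  kubectl delete pod manual-probe -n services-8010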
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8010
STEP: Deleting pod verify-service-up-exec-pod-dv57w in namespace services-8010
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-8010, will wait for the garbage collector to delete the pods
Nov 13 03:49:07.319: INFO: Deleting ReplicationController up-down-1 took: 4.948888ms
Nov 13 03:49:07.419: INFO: Terminating ReplicationController up-down-1 pods took: 100.219156ms
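Stopping up-down-1 here amounts to deleting the replication controller and waiting for the garbage collector to remove its pods; the next step then expects curls to the old VIP to time out, which suggests the Service object goes away as well. A rough by-hand equivalent, assuming the RC's pods carry a name=up-down-1 label (an assumption about the serve-hostname RC's selector):

  # Delete the replication controller; the garbage collector removes its pods.
  kubectl delete rc up-down-1 -n services-8010
  # Remove the Service too, so the VIP stops answering entirely.
  kubectl delete svc up-down-1 -n services-8010
  # Block until the backend pods are actually gone (the label selector is an assumption).
  kubectl wait --for=delete pod -l name=up-down-1 -n services-8010 --timeout=120s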
STEP: verifying service up-down-1 is not up
Nov 13 03:49:21.431: INFO: Creating new host exec pod
Nov 13 03:49:21.449: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:23.452: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Nov 13 03:49:23.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.17.29:80 && echo service-down-failed'
Nov 13 03:49:25.903: INFO: rc: 28
Nov 13 03:49:25.903: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.17.29:80 && echo service-down-failed" in pod services-8010/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.17.29:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.17.29:80
command terminated with exit code 28

error:
exit status 28
Output: 
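The negative check above hinges on curl's exit status: with --connect-timeout 2, a VIP that no longer answers makes curl fail (exit 28, "operation timed out", in this run), and the "&& echo service-down-failed" guard means the sentinel string only appears if the VIP unexpectedly responds. The same check, sketched as a small standalone snippet (the IP comes from the log; the surrounding reporting is illustrative):

  # A non-zero curl status (28 = timed out in the run above) means nothing answered on the VIP.
  if curl -g -s --connect-timeout 2 http://10.233.17.29:80; then
    echo "service-down-failed"    # the suite treats any response here as a failure
  else
    echo "VIP did not answer as expected (curl exit $?)"
  fi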
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-8010
STEP: verifying service up-down-2 is still up
Nov 13 03:49:25.914: INFO: Creating new host exec pod
Nov 13 03:49:25.925: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:27.930: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:29.929: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:49:29.929: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:49:37.948: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-host-exec-pod
Nov 13 03:49:37.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done'
Nov 13 03:49:38.299: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n ... [wget/echo trace repeats identically through iteration 150; full dump elided] ...\n"
Nov 13 03:49:38.299: INFO: stdout: "up-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\n ... [remaining responses elided; every line names one of the three up-down-2 backends zcxcg, zs6bq or 9d7hj] ...\n"
Nov 13 03:49:38.300: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-exec-pod-pqst4
Nov 13 03:49:38.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-exec-pod-pqst4 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done'
Nov 13 03:49:38.738: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n ... [wget/echo trace repeats identically through iteration 150; full dump elided] ...\n"
Nov 13 03:49:38.739: INFO: stdout: "up-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zcxcg\n ... [remaining responses elided; every line names one of the three up-down-2 backends zcxcg, zs6bq or 9d7hj] ...\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8010
STEP: Deleting pod verify-service-up-exec-pod-pqst4 in namespace services-8010
STEP: creating service up-down-3 in namespace services-8010
STEP: creating replication controller up-down-3 in namespace services-8010
I1113 03:49:38.762265      25 runners.go:190] Created replication controller with name: up-down-3, namespace: services-8010, replica count: 3
I1113 03:49:41.813510      25 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1113 03:49:44.814052      25 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
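With up-down-3 reporting 3 of 3 pods running, one way to confirm that the new backends actually registered behind their Service while the suite re-verifies up-down-2 is to list the endpoints directly (resource names and namespace come from the log; the name=up-down-3 label selector is an assumption):

  # Three endpoint addresses are expected behind the new service.
  kubectl get endpoints up-down-3 -n services-8010 -o wide
  # Cross-check against the RC's pods (the label selector is an assumption).
  kubectl get pods -n services-8010 -l name=up-down-3 -o wide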
STEP: verifying service up-down-2 is still up
Nov 13 03:49:44.816: INFO: Creating new host exec pod
Nov 13 03:49:44.831: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:46.837: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:49:46.837: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:49:50.854: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-host-exec-pod
Nov 13 03:49:50.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done'
Nov 13 03:49:51.190: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n ... [wget/echo trace repeats identically through iteration 150; full dump elided] ...\n"
Nov 13 03:49:51.190: INFO: stdout: "up-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zs6bq\n ... [remaining responses elided; every line names one of the three up-down-2 backends zcxcg, zs6bq or 9d7hj] ...\n"
Nov 13 03:49:51.191: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-exec-pod-t5zbx
Nov 13 03:49:51.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-exec-pod-t5zbx -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.23.240:80 2>&1 || true; echo; done'
Nov 13 03:49:51.551: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.23.240:80\n+ echo\n ... [wget/echo trace repeats identically through iteration 150; full dump elided] ...\n"
Nov 13 03:49:51.552: INFO: stdout: "up-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-zs6bq\nup-down-2-zcxcg\nup-down-2-zcxcg\nup-down-2-zs6bq\nup-down-2-9d7hj\nup-down-2-9d7hj\nup-down-2-zcxcg\nup-down-2-zcxcg\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8010
STEP: Deleting pod verify-service-up-exec-pod-t5zbx in namespace services-8010
STEP: verifying service up-down-3 is up
Nov 13 03:49:51.569: INFO: Creating new host exec pod
Nov 13 03:49:51.581: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:53.585: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:55.588: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:57.586: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:49:59.585: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:50:01.586: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:50:03.586: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:50:05.586: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:50:07.590: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Nov 13 03:50:09.586: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Nov 13 03:50:09.586: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Nov 13 03:50:13.605: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.28:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-host-exec-pod
Nov 13 03:50:13.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.28:80 2>&1 || true; echo; done'
Nov 13 03:50:14.004: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.60.28:80\n+ echo\n..." (the "+ wget -q -T 1 -O - http://10.233.60.28:80\n+ echo\n" pair repeats for all 150 iterations of the loop)
Nov 13 03:50:14.004: INFO: stdout: "up-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\n"
Nov 13 03:50:14.004: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.28:80 2>&1 || true; echo; done" in pod services-8010/verify-service-up-exec-pod-bxdtx
Nov 13 03:50:14.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8010 exec verify-service-up-exec-pod-bxdtx -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.28:80 2>&1 || true; echo; done'
Nov 13 03:50:14.366: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.60.28:80\n+ echo\n..." (the "+ wget -q -T 1 -O - http://10.233.60.28:80\n+ echo\n" pair repeats for all 150 iterations of the loop)
Nov 13 03:50:14.367: INFO: stdout: "up-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-6g7v4\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\nup-down-3-6gglc\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-cbcwt\nup-down-3-6g7v4\nup-down-3-6gglc\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-8010
STEP: Deleting pod verify-service-up-exec-pod-bxdtx in namespace services-8010
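The reachability check the framework runs above can be reproduced by hand against any ClusterIP service. A minimal sketch, assuming a throwaway busybox pod named probe and reusing the service IP 10.233.60.28 from this run (the pod name and image are placeholders, not part of the test):

  # start a disposable client pod that has wget (busybox is an assumption; any image with wget works)
  kubectl run probe --image=busybox:1.28 --restart=Never -- sleep 3600
  kubectl wait --for=condition=Ready pod/probe
  # the same loop the test exec'd: 150 one-second-timeout GETs, printing one backend hostname per line
  kubectl exec probe -- /bin/sh -c 'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.60.28:80 2>&1 || true; echo; done'
  # clean up the helper pod
  kubectl delete pod probe

Sorting and de-duplicating that output (for example with sort -u) should list every backend pod behind the service, which is what the "verifying service has 3 reachable backends" step asserts.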
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 03:50:14.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8010" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:123.907 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":2,"skipped":405,"failed":0}
Nov 13 03:50:14.396: INFO: Running AfterSuite actions on all nodes


{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork","total":-1,"completed":0,"skipped":662,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork"]}
Nov 13 03:48:36.560: INFO: Running AfterSuite actions on all nodes
Nov 13 03:50:14.457: INFO: Running AfterSuite actions on node 1
Nov 13 03:50:14.457: INFO: Skipping dumping logs from cluster
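Each spec also writes a one-line JSON progress record (the PASSED/FAILED lines above), which makes it easy to pull the failures out of a long log before reading the summary below. A minimal sketch, assuming the run's output was saved to e2e.log and that jq is available (both are assumptions; plain grep works on its own too):

  # list only the failed specs from the saved log
  grep -o '{"msg":"FAILED[^}]*}' e2e.log | jq -r '.msg'
  # without jq, the raw JSON lines are still readable:
  grep '"msg":"FAILED' e2e.log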



Summarizing 3 Failures:

[Fail] [sig-network] Networking Granular Checks: Services [It] should function for service endpoints using hostNetwork 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

Ran 27 of 5770 Specs in 219.866 seconds
FAIL! -- 24 Passed | 3 Failed | 0 Pending | 5743 Skipped


Ginkgo ran 1 suite in 3m41.522713589s
Test Suite Failed
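After a partial failure like this, the usual next step is to re-run only the failing specs rather than the full suite. A minimal sketch, assuming the same compiled e2e.test binary and the kubeconfig used for this run (flag spelling follows the upstream e2e docs; adjust provider-specific flags to your environment):

  # focus on one failing spec by matching its description text
  ./e2e.test --kubeconfig=/root/.kube/config \
    --ginkgo.focus="should be able to preserve UDP traffic when server pod cycles for a NodePort service"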